diff --git a/.github/scripts/badwords.txt b/.github/scripts/badwords.txt index bc26bb02d7..3a383ec39e 100644 --- a/.github/scripts/badwords.txt +++ b/.github/scripts/badwords.txt @@ -82,6 +82,7 @@ file names\b:filenames \b([02-9]|[1-9][0-9]+) bit\b: NN-bit [0-9]+-bits:NN bits or NN-bit \bvery\b:rephrase using an alternative word +\bjust\b:rephrase using an alternative word \bCurl\b=curl \bcURL\b=curl \bLibcurl\b=libcurl diff --git a/SECURITY.md b/SECURITY.md index ddf6415c00..e579ebb6e6 100644 --- a/SECURITY.md +++ b/SECURITY.md @@ -10,8 +10,8 @@ Read our [Vulnerability Disclosure Policy](docs/VULN-DISCLOSURE-POLICY.md). ## Reporting a Vulnerability -If you have found or just suspect a security problem somewhere in curl or -libcurl, [report it](https://curl.se/dev/vuln-disclosure.html)! +If you have found or suspect a security problem somewhere in curl or libcurl, +[report it](https://curl.se/dev/vuln-disclosure.html)! We treat security issues with confidentiality until controlled and disclosed responsibly. diff --git a/docs/BUGS.md b/docs/BUGS.md index dde3a71a6e..761f26e442 100644 --- a/docs/BUGS.md +++ b/docs/BUGS.md @@ -91,10 +91,10 @@ Showing us a real source code example repeating your problem is the best way to get our attention and it greatly increases our chances to understand your problem and to work on a fix (if we agree it truly is a problem). -Lots of problems that appear to be libcurl problems are actually just abuses -of the libcurl API or other malfunctions in your applications. It is advised -that you run your problematic program using a memory debug tool like valgrind -or similar before you post memory-related or "crashing" problems to us. +Lots of problems that appear to be libcurl problems are instead abuses of the +libcurl API or other malfunctions in your applications. It is advised that you +run your problematic program using a memory debug tool like valgrind or +similar before you post memory-related or "crashing" problems to us. 
## Who fixes the problems @@ -106,11 +106,11 @@ All developers that take on reported bugs do this on a voluntary basis. We do it out of an ambition to keep curl and libcurl excellent products and out of pride. -Please do not assume that you can just lump over something to us and it then +Please do not assume that you can lump over something to us and it then magically gets fixed after some given time. Most often we need feedback and -help to understand what you have experienced and how to repeat a problem. -Then we may only be able to assist YOU to debug the problem and to track down -the proper fix. +help to understand what you have experienced and how to repeat a problem. Then +we may only be able to assist YOU to debug the problem and to track down the +proper fix. We get reports from many people every month and each report can take a considerable amount of time to really go to the bottom with. @@ -165,11 +165,10 @@ Even if you cannot immediately upgrade your application/system to run the latest curl version, you can most often at least run a test version or experimental build or similar, to get this confirmed or not. -At times people insist that they cannot upgrade to a modern curl version, but -instead, they "just want the bug fixed". That is fine, just do not count on us -spending many cycles on trying to identify which single commit, if that is -even possible, that at some point in the past fixed the problem you are now -experiencing. +At times people insist that they cannot upgrade to a modern curl version; they +only "want the bug fixed". That is fine, but do not count on us spending many +cycles on trying to identify which single commit, if that is even possible, at +some point in the past fixed the problem you are now experiencing. Security wise, it is almost always a bad idea to lag behind the current curl versions by a lot.
We keep discovering and reporting security problems diff --git a/docs/CONTRIBUTE.md b/docs/CONTRIBUTE.md index 1449cc191e..abb70f2af7 100644 --- a/docs/CONTRIBUTE.md +++ b/docs/CONTRIBUTE.md @@ -56,7 +56,7 @@ Source code, the man pages, the [INTERNALS document](https://curl.se/dev/internals.html), [TODO](https://curl.se/docs/todo.html), [KNOWN_BUGS](https://curl.se/docs/knownbugs.html) and the [most recent -changes](https://curl.se/dev/sourceactivity.html) in git. Just lurking on the +changes](https://curl.se/dev/sourceactivity.html) in git. Lurking on the [curl-library mailing list](https://curl.se/mail/list.cgi?list=curl-library) gives you a lot of insights on what's going on right now. Asking there is a good idea too. @@ -145,8 +145,8 @@ then come on GitHub. Your changes be reviewed and discussed and you are expected to correct flaws pointed out and update accordingly, or the change risks stalling and -eventually just getting deleted without action. As a submitter of a change, -you are the owner of that change until it has been merged. +eventually getting deleted without action. As a submitter of a change, you are +the owner of that change until it has been merged. Respond on the list or on GitHub about the change and answer questions and/or fix nits/flaws. This is important. We take lack of replies as a sign that you @@ -169,8 +169,8 @@ ways. [See the CI document for more information](https://github.com/curl/curl/blob/master/docs/tests/CI.md). Sometimes the tests fail due to a dependency service temporarily being offline -or otherwise unavailable, e.g. package downloads. In this case you can just -try to update your pull requests to rerun the tests later as described below. +or otherwise unavailable, e.g. package downloads. In this case you can try to +update your pull requests to rerun the tests later as described below. You can update your pull requests by pushing new commits or force-pushing changes to existing commits. 
Force-pushing an amended commit without any @@ -285,8 +285,9 @@ If you are a frequent contributor, you may be given push access to the git repository and then you are able to push your changes straight into the git repository instead of sending changes as pull requests or by mail as patches. -Just ask if this is what you would want. You are required to have posted -several high quality patches first, before you can be granted push access. +Feel free to ask for this if it is what you want. You are required to have +posted several high-quality patches first, before you can be granted push +access. ## Useful resources @@ -320,13 +321,13 @@ You must also double-check the findings carefully before reporting them to us to validate that the issues are indeed existing and working exactly as the AI says. AI-based tools frequently generate inaccurate or fabricated results. -Further: it is *rarely* a good idea to just copy and paste an AI generated -report to the project. Those generated reports typically are too wordy and -rarely to the point (in addition to the common fabricated details). If you -actually find a problem with an AI and you have verified it yourself to be -true: write the report yourself and explain the problem as you have learned -it. This makes sure the AI-generated inaccuracies and invented issues are -filtered out early before they waste more people's time. +Further: it is *rarely* a good idea to copy and paste an AI-generated report +to the project. Those generated reports typically are too wordy and rarely to +the point (in addition to the common fabricated details). If you actually find +a problem with an AI and you have verified it yourself to be true: write the +report yourself and explain the problem as you have learned it. This makes +sure the AI-generated inaccuracies and invented issues are filtered out early +before they waste more people's time. As we take security reports seriously, we investigate each report with priority.
This work is both time and energy consuming and pulls us away from diff --git a/docs/CURLDOWN.md b/docs/CURLDOWN.md index 6726b3946c..ce19b5f5d6 100644 --- a/docs/CURLDOWN.md +++ b/docs/CURLDOWN.md @@ -119,7 +119,7 @@ syntax: ~~~ Quoted source code should start with `~~~c` and end with `~~~` while regular -quotes can start with `~~~` or just be indented with 4 spaces. +quotes can start with `~~~` or be indented with 4 spaces. Headers at top-level `#` get converted to `.SH`. @@ -134,8 +134,7 @@ Write italics like: This is *italics*. Due to how man pages do not support backticks especially formatted, such -occurrences in the source are instead just using italics in the generated -output: +occurrences in the source are instead using italics in the generated output: This `word` appears in italics. diff --git a/docs/EARLY-RELEASE.md b/docs/EARLY-RELEASE.md index f5efb3d442..8ec74c3e20 100644 --- a/docs/EARLY-RELEASE.md +++ b/docs/EARLY-RELEASE.md @@ -27,7 +27,7 @@ in the git master branch. An early patch release means that we ship a new, complete and full release called `major.minor.patch` where the `patch` part is increased by one since the previous release. A curl release is a curl release. There is no small or -big and we never release just a patch. There is only "release". +big and we never ship stand-alone separate patches. There is only "release". ## Questions to ask diff --git a/docs/ECH.md b/docs/ECH.md index 129378ad75..391e6f2b6d 100644 --- a/docs/ECH.md +++ b/docs/ECH.md @@ -410,9 +410,9 @@ for ECH when DoH is not used by curl - if a system stub resolver supports DoT or DoH, then, considering only ECH and the network threat model, it would make sense for curl to support ECH without curl itself using DoH. The author for example uses a combination of stubby+unbound as the system resolver listening -on localhost:53, so would fit this use-case. That said, it is unclear if -this is a niche that is worth trying to address. 
(The author is just as happy to -let curl use DoH to talk to the same public recursive that stubby might use:-) +on localhost:53, so would fit this use-case. That said, it is unclear if this +is a niche that is worth trying to address. (The author is happy to let curl +use DoH to talk to the same public recursive that stubby might use:-) Assuming for the moment this is a use-case we would like to support, then if DoH is not being used by curl, it is not clear at this time how to provide @@ -432,14 +432,6 @@ Our current conclusion is that doing the above is likely best left until we have some experience with the "using DoH" approach, so we are going to punt on this for now. -### Debugging - -Just a note to self as remembering this is a nuisance: - -```sh -LD_LIBRARY_PATH=$HOME/code/openssl:./lib/.libs gdb ./src/.libs/curl -``` - ### Localhost testing It can be useful to be able to run against a localhost OpenSSL ``s_server`` @@ -467,9 +459,9 @@ cd $HOME/code/curl/ ### Automated use of ``retry_configs`` not supported so far... As of now we have not added support for using ``retry_config`` handling in the -application - for a command line tool, one can just use ``dig`` (or ``kdig``) -to get the HTTPS RR and pass the ECHConfigList from that on the command line, -if needed, or one can access the value from command line output in verbose more +application - for a command line tool, one can use ``dig`` (or ``kdig``) to +get the HTTPS RR and pass the ECHConfigList from that on the command line, if +needed, or one can access the value from command line output in verbose mode and then reuse that in another invocation. Both our OpenSSL fork and BoringSSL/AWS-LC have APIs for both controlling GREASE diff --git a/docs/FAQ.md b/docs/FAQ.md index d2cf9c8312..a3e9a9fa37 100644 --- a/docs/FAQ.md +++ b/docs/FAQ.md @@ -425,8 +425,8 @@ about bindings on the curl-library list too, but be prepared that people on that list may not know anything about bindings.
In December 2025 there were around **60** different [interfaces -available](https://curl.se/libcurl/bindings.html) for just about all the -languages you can imagine. +available](https://curl.se/libcurl/bindings.html) for almost any language you +can imagine. ## What about SOAP, WebDAV, XML-RPC or similar protocols over HTTP? @@ -435,8 +435,8 @@ protocol that is built on top of HTTP. Protocols such as SOAP, WebDAV and XML-RPC are all such ones. You can use `-X` to set custom requests and -H to set custom headers (or replace internally generated ones). -Using libcurl is of course just as good and you would just use the proper -library options to do the same. +Using libcurl of course also works and you would use the proper library +options to do the same. ## How do I POST with a different Content-Type? @@ -488,14 +488,13 @@ individuals have ever tried. ## Does curl support JavaScript or PAC (automated proxy config)? Many webpages do magic stuff using embedded JavaScript. curl and libcurl have -no built-in support for that, so it will be treated just like any other -contents. +no built-in support for that, so it is treated like any other content. `.pac` files are a Netscape invention and are sometimes used by organizations -to allow them to differentiate which proxies to use. The `.pac` contents is -just a JavaScript program that gets invoked by the browser and that returns -the name of the proxy to connect to. Since curl does not support JavaScript, -it cannot support .pac proxy configuration either. +to allow them to differentiate which proxies to use. A `.pac` file contains a +JavaScript program that gets invoked by the browser and that returns the name +of the proxy to connect to. Since curl does not support JavaScript, it cannot +support .pac proxy configuration either.
Some workarounds usually suggested to overcome this JavaScript dependency: @@ -601,7 +600,7 @@ URL syntax which for SFTP might look similar to: curl -O -u user:password sftp://example.com/~/file.txt -and for SCP it is just a different protocol prefix: +and for SCP it is a different protocol prefix: curl -O -u user:password scp://example.com/~/file.txt @@ -624,7 +623,7 @@ the protocol part with a space as in `" https://example.com/"`. In normal circumstances, `-X` should hardly ever be used. By default you use curl without explicitly saying which request method to use -when the URL identifies an HTTP transfer. If you just pass in a URL like `curl +when the URL identifies an HTTP transfer. If you pass in a URL like `curl https://example.com` it will use GET. If you use `-d` or `-F`, curl will use POST, `-I` will cause a HEAD and `-T` will make it a PUT. @@ -929,7 +928,7 @@ In either case, curl should now be looking for the correct file. Unplugging a cable is not an error situation. The TCP/IP protocol stack was designed to be fault tolerant, so even though there may be a physical break -somewhere the connection should not be affected, just possibly delayed. +somewhere the connection should not be affected, but possibly delayed. Eventually, the physical break will be fixed or the data will be re-routed around the physical problem through another path. @@ -1033,7 +1032,7 @@ WriteMemoryCallback(void *ptr, size_t size, size_t nmemb, void *data) ## How do I fetch multiple files with libcurl? -libcurl has excellent support for transferring multiple files. You should just +libcurl has excellent support for transferring multiple files. You should repeatedly set new URLs with `curl_easy_setopt()` and then transfer it with `curl_easy_perform()`. The handle you get from curl_easy_init() is not only reusable, but you are even encouraged to reuse it if you can, as that will @@ -1274,8 +1273,8 @@ never exposed to the outside. 
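The multiple-transfer pattern the FAQ describes for libcurl has a command line counterpart: a single curl invocation can take several URLs, each paired with its own output option. A self-contained sketch — the `file://` URLs and temp file names are invented for illustration so it runs without network access:

```shell
# Fetch two URLs in one curl invocation, one -o option per URL.
printf 'one' > /tmp/multi_a.txt
printf 'two' > /tmp/multi_b.txt
curl -s -o /tmp/multi_a.out "file:///tmp/multi_a.txt" \
     -o /tmp/multi_b.out "file:///tmp/multi_b.txt"
cat /tmp/multi_a.out /tmp/multi_b.out   # prints: onetwo
```

curl pairs each `-o` with the URL given in the same position, which mirrors how a reused easy handle walks through URLs one at a time in libcurl.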
# License curl and libcurl are released under an MIT/X derivative license. The license -is liberal and should not impose a problem for your project. This section is -just a brief summary for the cases we get the most questions. +is liberal and should not pose a problem for your project. This section is a +brief summary of the cases we get the most questions about. We are not lawyers and this is not legal advice. You should probably consult one if you want true and accurate legal insights without our prejudice. Note @@ -1384,8 +1383,8 @@ PHP/CURL was initially written by Sterling Hughes. Yes. -After a transfer, you just set new options in the handle and make another -transfer. This will make libcurl reuse the same connection if it can. +After a transfer, you set new options in the handle and make another transfer. +This will make libcurl reuse the same connection if it can. ## Does PHP/CURL have dependencies? diff --git a/docs/GOVERNANCE.md b/docs/GOVERNANCE.md index 902b09da1f..bedf6796fd 100644 --- a/docs/GOVERNANCE.md +++ b/docs/GOVERNANCE.md @@ -21,7 +21,7 @@ what the project and the general user population wants and expects from us. ## Legal entity -There is no legal entity. The curl project is just a bunch of people scattered +There is no legal entity. The curl project is a bunch of people scattered around the globe with the common goal to produce source code that creates great products. We are not part of any umbrella organization and we are not located in any specific country. We are totally independent. @@ -110,7 +110,7 @@ developers familiar with the curl project. The security team works best when it consists of a small set of active persons. We invite new members when the team seems to need it, and we also expect to retire security team members as they "drift off" from the project or -just find themselves unable to perform their duties there. +find themselves unable to perform their duties there.
## Core team diff --git a/docs/HISTORY.md b/docs/HISTORY.md index 1816697500..a6a723af62 100644 --- a/docs/HISTORY.md +++ b/docs/HISTORY.md @@ -8,9 +8,9 @@ SPDX-License-Identifier: curl Towards the end of 1996, Daniel Stenberg was spending time writing an IRC bot for an Amiga related channel on EFnet. He then came up with the idea to make -currency-exchange calculations available to Internet Relay Chat (IRC) -users. All the necessary data were published on the Web; he just needed to -automate their retrieval. +currency-exchange calculations available to Internet Relay Chat (IRC) users. +All the necessary data were published on the Web; he only needed to automate +their retrieval. ## 1996 @@ -18,9 +18,9 @@ On November 11, 1996 the Brazilian developer Rafael Sagula wrote and released HttpGet version 0.1. Daniel extended this existing command-line open-source tool. After a few minor -adjustments, it did just what he needed. The first release with Daniel's -additions was 0.2, released on December 17, 1996. Daniel quickly became the -new maintainer of the project. +adjustments, it did what he needed. The first release with Daniel's additions +was 0.2, released on December 17, 1996. Daniel quickly became the new +maintainer of the project. ## 1997 @@ -309,7 +309,7 @@ June: support for multiplexing with HTTP/2 August: support for HTTP/2 server push September: started "everything curl". A separate stand-alone book documenting -curl and related info in perhaps a more tutorial style rather than just a +curl and related info in perhaps a more tutorial style rather than a reference, December: Public Suffix List diff --git a/docs/HTTP-COOKIES.md b/docs/HTTP-COOKIES.md index 49663b05ae..2a32aae605 100644 --- a/docs/HTTP-COOKIES.md +++ b/docs/HTTP-COOKIES.md @@ -98,8 +98,8 @@ Field number, what type and example data and the meaning of it: ## Cookies with curl the command line tool -curl has a full cookie "engine" built in. 
If you just activate it, you can -have curl receive and send cookies exactly as mandated in the specs. +curl has a full cookie "engine" built in. If you activate it, you can have +curl receive and send cookies exactly as mandated in the specs. Command line options: diff --git a/docs/HTTP3.md b/docs/HTTP3.md index ae5e557bc1..8be87234d8 100644 --- a/docs/HTTP3.md +++ b/docs/HTTP3.md @@ -29,7 +29,7 @@ HTTP/3 support in curl is considered **EXPERIMENTAL** until further notice when built to use *quiche*. Only the *ngtcp2* backend is not experimental. Further development and tweaking of the HTTP/3 support in curl happens in the -master branch using pull-requests, just like ordinary changes. +master branch using pull-requests like ordinary changes. To fix before we remove the experimental label: @@ -305,9 +305,9 @@ handshake or time out. Note that all this happens in addition to IP version happy eyeballing. If the name resolution for the server gives more than one IP address, curl tries all -those until one succeeds - just as with all other protocols. If those IP -addresses contain both IPv6 and IPv4, those attempts happen, delayed, in -parallel (the actual eyeballing). +those until one succeeds - as with all other protocols. If those IP addresses +contain both IPv6 and IPv4, those attempts happen, delayed, in parallel (the +actual eyeballing). ## Known Bugs @@ -322,8 +322,7 @@ development and experimenting. An existing local HTTP/1.1 server that hosts files. Preferably also a few huge ones. You can easily create huge local files like `truncate -s=8G 8GB` - they -are huge but do not occupy that much space on disk since they are just big -holes. +are huge but do not occupy that much space on disk since they are big holes. In a Debian setup you can install apache2. It runs on port 80 and has a document root in `/var/www/html`. 
Download the 8GB file from apache with `curl @@ -350,8 +349,8 @@ Get, build and install nghttp2: % make && make install Run the local h3 server on port 9443, make it proxy all traffic through to -HTTP/1 on localhost port 80. For local toying, we can just use the test cert -that exists in curl's test dir. +HTTP/1 on localhost port 80. For local toying, we can use the test cert that +exists in curl's test dir. % CERT=/path/to/stunnel.pem % $HOME/bin/nghttpx $CERT $CERT --backend=localhost,80 \ diff --git a/docs/INSTALL-CMAKE.md b/docs/INSTALL-CMAKE.md index 9c92e4b4ca..db73b0023a 100644 --- a/docs/INSTALL-CMAKE.md +++ b/docs/INSTALL-CMAKE.md @@ -155,8 +155,8 @@ assumes that CMake generates `Makefile`: # CMake usage -Just as curl can be built and installed using CMake, it can also be used from -CMake. +This section describes how to locate and use curl/libcurl from CMake-based +projects. ## Using `find_package` diff --git a/docs/IPFS.md b/docs/IPFS.md index 64e0c53b50..2cf64543b9 100644 --- a/docs/IPFS.md +++ b/docs/IPFS.md @@ -75,7 +75,7 @@ curl ipfs://bafybeigagd5nmnn2iys2f3doro7ydrevyr2mzarwidgadawmamiteydbzi ``` With the IPFS protocol way of asking a file, curl still needs to know the -gateway. curl essentially just rewrites the IPFS based URL to a gateway URL. +gateway. curl essentially rewrites the IPFS based URL to a gateway URL. ### IPFS_GATEWAY environment variable diff --git a/docs/KNOWN_RISKS.md b/docs/KNOWN_RISKS.md index 0b22ce0b0c..dd3f5c97ea 100644 --- a/docs/KNOWN_RISKS.md +++ b/docs/KNOWN_RISKS.md @@ -26,8 +26,8 @@ to you from untrusted sources. curl can do a lot of things, and you should only ask it do things you want and deem correct. -Even just accepting just the URL part without careful vetting might make curl -do things you do not like. Like accessing internal hosts, like connecting to +Even accepting only the URL part without careful vetting might make curl do +things you do not like. 
Like accessing internal hosts, like connecting to rogue servers that redirect to even weirder places, like using ports or protocols that play tricks on you. diff --git a/docs/MAIL-ETIQUETTE.md b/docs/MAIL-ETIQUETTE.md index 4c2f95f38b..7a32e18a0f 100644 --- a/docs/MAIL-ETIQUETTE.md +++ b/docs/MAIL-ETIQUETTE.md @@ -39,8 +39,8 @@ way to read the reply, but to ask the one person the question. The one person consequently gets overloaded with mail. If you really want to contact an individual and perhaps pay for his or her -services, by all means go ahead, but if it is just another curl question, take -it to a suitable list instead. +services, by all means go ahead, but if it is another curl question, take it +to a suitable list instead. ### Subscription Required @@ -150,8 +150,8 @@ individuals. There is no way to undo a sent email. When sending emails to a curl mailing list, do not include sensitive information such as usernames and passwords; use fake ones, temporary ones or -just remove them completely from the mail. Note that this includes base64 -encoded HTTP Basic auth headers. +remove them completely from the mail. Note that this includes base64 encoded +HTTP Basic auth headers. This public nature of the curl mailing lists makes automatically inserted mail footers about mails being "private" or "only meant for the recipient" or @@ -167,14 +167,14 @@ the lists. Many mail programs and web archivers use information within mails to keep them together as "threads", as collections of posts that discuss a certain subject. -If you do not intend to reply on the same or similar subject, do not just hit -reply on an existing mail and change the subject, create a new mail. +If you do not intend to reply on the same or similar subject, do not hit reply +on an existing mail and change the subject, create a new mail. 
### Reply to the List When replying to a message from the list, make sure that you do "group reply" -or "reply to all", and not just reply to the author of the single mail you -reply to. +or "reply to all", and not reply to the author of the single mail you reply +to. We are actively discouraging replying to the single person by setting the correct field in outgoing mails back asking for replies to get sent to the @@ -222,8 +222,8 @@ mails to your friends. We speak plain text mails. ### Quoting -Quote as little as possible. Just enough to provide the context you cannot -leave out. +Quote as little as possible. Enough to provide the context you cannot leave +out. ### Digest diff --git a/docs/MANUAL.md b/docs/MANUAL.md index ff66cc5a72..17ecc145b2 100644 --- a/docs/MANUAL.md +++ b/docs/MANUAL.md @@ -96,8 +96,8 @@ or specify them with the `-u` flag like ### FTPS -It is just like for FTP, but you may also want to specify and use SSL-specific -options for certificates etc. +It is like FTP, but you may also want to specify and use SSL-specific options +for certificates etc. Note that using `FTPS://` as prefix is the *implicit* way as described in the standards while the recommended *explicit* way is done by using `FTP://` and @@ -660,7 +660,7 @@ incoming connections. curl ftp.example.com If the server, for example, is behind a firewall that does not allow -connections on ports other than 21 (or if it just does not support the `PASV` +connections on ports other than 21 (or if it does not support the `PASV` command), the other way to do it is to use the `PORT` command and instruct the server to connect to the client on the given IP number and port (as parameters to the PORT command). @@ -855,8 +855,8 @@ therefore most Unix programs do not read this file unless it is only readable by yourself (curl does not care though). curl supports `.netrc` files if told to (using the `-n`/`--netrc` and -`--netrc-optional` options). 
This is not restricted to just FTP, so curl can -use it for all protocols where authentication is used. +`--netrc-optional` options). This is not restricted to FTP, so curl can use it +for all protocols where authentication is used. A simple `.netrc` file could look something like: @@ -936,8 +936,8 @@ are persistent. As is mentioned above, you can download multiple files with one command line by simply adding more URLs. If you want those to get saved to a local file -instead of just printed to stdout, you need to add one save option for each -URL you specify. Note that this also goes for the `-O` option (but not +instead of printed to stdout, you need to add one save option for each URL you +specify. Note that this also goes for the `-O` option (but not `--remote-name-all`). For example: get two files and use `-O` for the first and a custom file diff --git a/docs/SECURITY-ADVISORY.md b/docs/SECURITY-ADVISORY.md index 14e7d96266..4f3e1df2c9 100644 --- a/docs/SECURITY-ADVISORY.md +++ b/docs/SECURITY-ADVISORY.md @@ -50,9 +50,9 @@ generated automatically using those files. ## Document format -The easy way is to start with a recent previously published advisory and just -blank out old texts and save it using a new name. Save the subtitles and -general layout. +The easy way is to start with a recent previously published advisory and blank +out old texts and save it using a new name. Save the subtitles and general +layout. Some details and metadata are extracted from this document so it is important to stick to the existing format. diff --git a/docs/TODO.md b/docs/TODO.md index 6bd63da6b5..402f6066c8 100644 --- a/docs/TODO.md +++ b/docs/TODO.md @@ -283,8 +283,8 @@ See [curl issue 1508](https://github.com/curl/curl/issues/1508) ## Provide the error body from a CONNECT response When curl receives a body response from a CONNECT request to a proxy, it -always just reads and ignores it. 
It would make some users happy if curl -instead optionally would be able to make that responsible available. Via a new +always reads and ignores it. It would make some users happy if curl instead +optionally would be able to make that response body available. Via a new callback? Through some other means? See [curl issue 9513](https://github.com/curl/curl/issues/9513) @@ -454,7 +454,7 @@ Currently the SMB authentication uses NTLMv1. ## Create remote directories Support for creating remote directories when uploading a file to a directory -that does not exist on the server, just like `--ftp-create-dirs`. +that does not exist on the server, like `--ftp-create-dirs`. # FILE @@ -662,8 +662,8 @@ the new transfer to the existing one. The SFTP code in libcurl checks the file size *before* a transfer starts and then proceeds to transfer exactly that amount of data. If the remote file grows while the transfer is in progress libcurl does not notice and does not -adapt. The OpenSSH SFTP command line tool does and libcurl could also just -attempt to download more to see if there is more to get... +adapt. The OpenSSH SFTP command line tool does and libcurl could also attempt +to download more to see if there is more to get... [curl issue 4344](https://github.com/curl/curl/issues/4344) @@ -958,8 +958,8 @@ test tools built with either OpenSSL or GnuTLS ## more protocols supported -Extend the test suite to include more protocols. The telnet could just do FTP -or http operations (for which we have test servers). +Extend the test suite to include more protocols. The telnet could do FTP or +HTTP operations (for which we have test servers). ## more platforms supported diff --git a/docs/TheArtOfHttpScripting.md b/docs/TheArtOfHttpScripting.md index b6d530fc29..f50ff8f79d 100644 --- a/docs/TheArtOfHttpScripting.md +++ b/docs/TheArtOfHttpScripting.md @@ -61,9 +61,9 @@ receives.
Use it like this: ## See the Timing -Many times you may wonder what exactly is taking all the time, or you just -want to know the amount of milliseconds between two points in a transfer. For -those, and other similar situations, the +Many times you may wonder what exactly is taking all the time, or you want to +know the number of milliseconds between two points in a transfer. For those, +and other similar situations, the [`--trace-time`](https://curl.se/docs/manpage.html#--trace-time) option is what you need. It prepends the time to each trace output line: @@ -145,9 +145,9 @@ to use forms and cookies instead. ## Path part -The path part is just sent off to the server to request that it sends back -the associated response. The path is what is to the right side of the slash -that follows the hostname and possibly port number. +The path part is sent off to the server to request that it sends back the +associated response. The path is what is to the right side of the slash that +follows the hostname and possibly port number. # Fetch a page @@ -182,9 +182,8 @@ actual body in the HEAD response. ## Multiple URLs in a single command line A single curl command line may involve one or many URLs. The most common case -is probably to just use one, but you can specify any amount of URLs. Yes any. -No limits. You then get requests repeated over and over for all the given -URLs. +is probably to use one, but you can specify any number of URLs. Yes any. No +limits. You then get requests repeated over and over for all the given URLs. Example, send two GET requests: @@ -232,7 +231,7 @@ entered address on a map or using the info as a login-prompt verifying that the user is allowed to see what it is about to see. Of course there has to be some kind of program on the server end to receive -the data you send. You cannot just invent something out of the air. +the data you send. You cannot invent something out of the air.
## GET @@ -257,8 +256,7 @@ the second page you get becomes Most search engines work this way. -To make curl do the GET form post for you, just enter the expected created -URL: +To make curl do the GET form post for you, enter the expected created URL: curl "https://www.example.com/when/junk.cgi?birthyear=1905&press=OK" @@ -328,8 +326,8 @@ To post to a form like this with curl, you enter a command line like: A common way for HTML based applications to pass state information between pages is to add hidden fields to the forms. Hidden fields are already filled -in, they are not displayed to the user and they get passed along just as all -the other fields. +in, they are not displayed to the user and they get passed along as all the +other fields. A similar example form with one visible field, one hidden field and one submit button could look like: @@ -498,11 +496,11 @@ JavaScript to do it. ## Cookie Basics -The way the web browsers do "client side state control" is by using -cookies. Cookies are just names with associated contents. The cookies are -sent to the client by the server. The server tells the client for what path -and hostname it wants the cookie sent back, and it also sends an expiration -date and a few more properties. +The way the web browsers do "client side state control" is by using cookies. +Cookies are names with associated contents. The cookies are sent to the client +by the server. The server tells the client for what path and hostname it wants +the cookie sent back, and it also sends an expiration date and a few more +properties. When a client communicates with a server with a name and path as previously specified in a received cookie, the client sends back the cookies and their @@ -646,9 +644,9 @@ body etc. ## Some login tricks -While not strictly just HTTP related, it still causes a lot of people -problems so here's the executive run-down of how the vast majority of all -login forms work and how to login to them using curl. 
+While not strictly HTTP related, it still causes a lot of people problems so +here's the executive run-down of how the vast majority of all login forms work +and how to login to them using curl. It can also be noted that to do this properly in an automated fashion, you most certainly need to script things and do multiple curl invokes etc. diff --git a/docs/VULN-DISCLOSURE-POLICY.md b/docs/VULN-DISCLOSURE-POLICY.md index e6562bc1a2..2ea6346fd9 100644 --- a/docs/VULN-DISCLOSURE-POLICY.md +++ b/docs/VULN-DISCLOSURE-POLICY.md @@ -201,8 +201,8 @@ This is an incomplete list of issues that are not considered vulnerabilities. We do not consider a small memory leak a security problem; even if the amount of allocated memory grows by a small amount every now and then. Long-living applications and services already need to have countermeasures and deal with -growing memory usage, be it leaks or just increased use. A small memory or -resource leak is then expected to *not* cause a security problem. +growing memory usage, be it leaks or increased use. A small memory or resource +leak is then expected to *not* cause a security problem. Of course there can be a discussion if a leak is small or not. A large leak can be considered a security problem due to the DOS risk. If leaked memory @@ -293,9 +293,8 @@ same directory where curl is directed to save files. A creative, misleading or funny looking command line is not a security problem. The curl command line tool takes options and URLs on the command line and if an attacker can trick the user to run a specifically crafted curl -command line, all bets are off. Such an attacker can just as well have the -user run a much worse command that can do something fatal (like -`sudo rm -rf /`). +command line, all bets are off. Such an attacker can already have the user run +a much worse command that can do something fatal (like `sudo rm -rf /`). 
## Terminal output and escape sequences @@ -414,9 +413,9 @@ roles: It is likely that our [BDFL](https://en.wikipedia.org/wiki/Benevolent_dictator_for_life) occupies one of these roles, though this plan does not depend on it. -A declaration may also contain more detailed information but as we honor embargoes -and vulnerability disclosure throughout this process, it may also just contain -brief notification that a **major incident** is occurring. +A declaration may also contain more detailed information but as we honor +embargoes and vulnerability disclosure throughout this process, it may also +contain a brief notification that a **major incident** is occurring. ## Major incident ongoing diff --git a/docs/cmdline-opts/alt-svc.md b/docs/cmdline-opts/alt-svc.md index a3b17d04f1..fe2e8736fa 100644 --- a/docs/cmdline-opts/alt-svc.md +++ b/docs/cmdline-opts/alt-svc.md @@ -21,7 +21,7 @@ Enable the alt-svc parser. If the filename points to an existing alt-svc cache file, that gets used. After a completed transfer, the cache is saved to the filename again if it has been modified. -Specify a "" filename (zero length) to avoid loading/saving and make curl just +Specify a "" filename (zero length) to avoid loading/saving and make curl handle the cache in memory. You may want to restrict your umask to prevent other users on the same system diff --git a/docs/cmdline-opts/data-ascii.md b/docs/cmdline-opts/data-ascii.md index 5763d81f19..c1d9d75bbd 100644 --- a/docs/cmdline-opts/data-ascii.md +++ b/docs/cmdline-opts/data-ascii.md @@ -18,4 +18,4 @@ Example: # `--data-ascii` -This option is just an alias for --data. +This option is an alias for --data. diff --git a/docs/cmdline-opts/data-urlencode.md b/docs/cmdline-opts/data-urlencode.md index b4680e61ac..36fdf3df2f 100644 --- a/docs/cmdline-opts/data-urlencode.md +++ b/docs/cmdline-opts/data-urlencode.md @@ -28,9 +28,9 @@ a separator and a content specification. 
The \<data\> part can be passed to curl using one of the following syntaxes:

## content

-URL-encode the content and pass that on. Just be careful so that the content
-does not contain any `=` or `@` symbols, as that makes the syntax match one of
-the other cases below.
+URL-encode the content and pass that on. Be careful so that the content does
+not contain any `=` or `@` symbols, as that makes the syntax match one of the
+other cases below.

## =content

URL-encode the content and pass that on. The preceding `=` symbol is not
diff --git a/docs/cmdline-opts/form.md b/docs/cmdline-opts/form.md
index abe4fa998b..87b019604f 100644
--- a/docs/cmdline-opts/form.md
+++ b/docs/cmdline-opts/form.md
@@ -28,11 +28,11 @@ For SMTP and IMAP protocols, this composes a multipart mail message to
transmit.

This enables uploading of binary files etc. To force the 'content' part to be
-a file, prefix the filename with an @ sign. To just get the content part from
-a file, prefix the filename with the symbol \<. The difference between @ and
-\< is then that @ makes a file get attached in the post as a file upload,
-while the \< makes a text field and just gets the contents for that text field
-from a file.
+a file, prefix the filename with an @ sign. To get the content part from a
+file, prefix the filename with the symbol \<. The difference between @ and \<
+is then that @ makes a file get attached in the post as a file upload, while
+the \< makes a text field and gets the contents for that text field from a
+file.

Read content from stdin instead of a file by using a single "-" as filename.
This goes for both @ and \< constructs. When stdin is used, the contents is
diff --git a/docs/cmdline-opts/hsts.md b/docs/cmdline-opts/hsts.md
index f58566e95d..bb1f1d2737 100644
--- a/docs/cmdline-opts/hsts.md
+++ b/docs/cmdline-opts/hsts.md
@@ -25,7 +25,7 @@ in the HSTS cache, it upgrades the transfer to use HTTPS.
Each HSTS cache entry has an individual lifetime after which the upgrade is no longer performed. -Specify a "" filename (zero length) to avoid loading/saving and make curl just +Specify a "" filename (zero length) to avoid loading/saving and make curl handle HSTS in memory. You may want to restrict your umask to prevent other users on the same system diff --git a/docs/cmdline-opts/list-only.md b/docs/cmdline-opts/list-only.md index eb52d88849..36d6321039 100644 --- a/docs/cmdline-opts/list-only.md +++ b/docs/cmdline-opts/list-only.md @@ -29,7 +29,7 @@ include subdirectories and symbolic links. When listing an SFTP directory, this switch forces a name-only view, one per line. This is especially useful if the user wants to machine-parse the contents of an SFTP directory since the normal directory view provides more -information than just filenames. +information than filenames. When retrieving a specific email from POP3, this switch forces a LIST command to be performed instead of RETR. This is particularly useful if the user wants diff --git a/docs/cmdline-opts/output.md b/docs/cmdline-opts/output.md index 6d68b33575..0c4f7f9fac 100644 --- a/docs/cmdline-opts/output.md +++ b/docs/cmdline-opts/output.md @@ -40,7 +40,7 @@ this: curl -o aa example.com -o bb example.net -and the order of the -o options and the URLs does not matter, just that the +and the order of the -o options and the URLs does not matter, only that the first -o is for the first URL and so on, so the above command line can also be written as diff --git a/docs/cmdline-opts/quote.md b/docs/cmdline-opts/quote.md index a5563010c6..1ac2076b74 100644 --- a/docs/cmdline-opts/quote.md +++ b/docs/cmdline-opts/quote.md @@ -18,13 +18,13 @@ Example: # `--quote` Send an arbitrary command to the remote FTP or SFTP server. Quote commands are -sent BEFORE the transfer takes place (just after the initial **PWD** command -in an FTP transfer, to be exact). 
To make commands take place after a +sent BEFORE the transfer takes place (immediately after the initial **PWD** +command in an FTP transfer, to be exact). To make commands take place after a successful transfer, prefix them with a dash '-'. (FTP only) To make commands be sent after curl has changed the working -directory, just before the file transfer command(s), prefix the command with a -'+'. +directory, immediately before the file transfer command(s), prefix the command +with a '+'. You may specify any number of commands. diff --git a/docs/cmdline-opts/skip-existing.md b/docs/cmdline-opts/skip-existing.md index cfb7c2f953..dbef2fae92 100644 --- a/docs/cmdline-opts/skip-existing.md +++ b/docs/cmdline-opts/skip-existing.md @@ -18,5 +18,5 @@ Example: If there is a local file present when a download is requested, the operation is skipped. Note that curl cannot know if the local file was previously -downloaded fine, or if it is incomplete etc, it just knows if there is a -filename present in the file system or not and it skips the transfer if it is. +downloaded fine, or if it is incomplete etc, it knows if there is a filename +present in the file system or not and it skips the transfer if it is. diff --git a/docs/cmdline-opts/write-out.md b/docs/cmdline-opts/write-out.md index 5b8fdb3e47..3b84ff9961 100644 --- a/docs/cmdline-opts/write-out.md +++ b/docs/cmdline-opts/write-out.md @@ -25,9 +25,8 @@ from stdin you write "@-". The variables present in the output format are substituted by the value or text that curl thinks fit, as described below. All variables are specified as -%{variable_name} and to output a normal % you just write them as %%. You can -output a newline by using \n, a carriage return with \r and a tab space with -\t. +%{variable_name} and to output a normal % you write them as %%. You can output +a newline by using \n, a carriage return with \r and a tab space with \t. 
The output is by default written to standard output, but can be changed with %{stderr} and %output{}. @@ -249,9 +248,9 @@ The time, in seconds, it took from the start until the last byte is sent by libcurl. (Added in 8.10.0) ## `time_pretransfer` -The time, in seconds, it took from the start until the file transfer was just -about to begin. This includes all pre-transfer commands and negotiations that -are specific to the particular protocol(s) involved. +The time, in seconds, it took from the start until immediately before the file +transfer was about to begin. This includes all pre-transfer commands and +negotiations that are specific to the particular protocol(s) involved. ## `time_queue` The time, in seconds, the transfer was queued during its run. This adds diff --git a/docs/examples/README.md b/docs/examples/README.md index a6a31c9388..06d07be0f3 100644 --- a/docs/examples/README.md +++ b/docs/examples/README.md @@ -16,8 +16,7 @@ them for submission in future packages and on the website. ## Building The `Makefile.example` is an example Makefile that could be used to build -these examples. Just edit the file according to your system and requirements -first. +these examples. Edit the file according to your system and requirements first. 
Most examples should build fine using a command line like this:

diff --git a/docs/examples/adddocsref.pl b/docs/examples/adddocsref.pl
index aba9abe2a0..5fe09ba40d 100755
--- a/docs/examples/adddocsref.pl
+++ b/docs/examples/adddocsref.pl
@@ -36,7 +36,7 @@ for my $f (@ARGV) {
while() {
my $l = $_;
if($l =~ /\/* $docroot/) {
- # just ignore preciously added refs
+ # ignore previously added refs
}
elsif($l =~ /^( *).*curl_easy_setopt\([^,]*, *([^ ,]*) *,/) {
my ($prefix, $anchor) = ($1, $2);
diff --git a/docs/examples/cacertinmem.c b/docs/examples/cacertinmem.c
index cd5013c30d..04a61d30df 100644
--- a/docs/examples/cacertinmem.c
+++ b/docs/examples/cacertinmem.c
@@ -164,7 +164,7 @@ int main(void)
/* second try: retrieve page using cacerts' certificate -> succeeds to
* load the certificate by installing a function doing the necessary
- * "modifications" to the SSL CONTEXT just before link init
+ * "modifications" to the SSL CONTEXT before link init
*/
curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_function);
result = curl_easy_perform(curl);
diff --git a/docs/examples/ftpupload.c b/docs/examples/ftpupload.c
index 5b6ed7893e..415bafe68e 100644
--- a/docs/examples/ftpupload.c
+++ b/docs/examples/ftpupload.c
@@ -22,8 +22,7 @@ *
***************************************************************************/
/*
- * Performs an FTP upload and renames the file just after a successful
- * transfer.
+ * Performs an FTP upload and renames the file after a successful transfer.
* */ #ifdef _MSC_VER diff --git a/docs/examples/ghiper.c b/docs/examples/ghiper.c index 7d79a973aa..aa41ef041d 100644 --- a/docs/examples/ghiper.c +++ b/docs/examples/ghiper.c @@ -177,7 +177,7 @@ static int update_timeout_cb(CURLM *multi, long timeout_ms, void *userp) timeout_ms, timeout.tv_sec, timeout.tv_usec); /* - * if timeout_ms is -1, just delete the timer + * if timeout_ms is -1, delete the timer * * For other values of timeout_ms, this should set or *update* the timer to * the new value diff --git a/docs/examples/headerapi.c b/docs/examples/headerapi.c index 61f2eb98e8..fed3af5b7b 100644 --- a/docs/examples/headerapi.c +++ b/docs/examples/headerapi.c @@ -52,7 +52,7 @@ int main(void) /* example.com is redirected, so we tell libcurl to follow redirection */ curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L); - /* this example just ignores the content */ + /* this example ignores the content */ curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb); /* Perform the request, result gets the return code */ diff --git a/docs/examples/hiperfifo.c b/docs/examples/hiperfifo.c index 109addef76..7d7c547036 100644 --- a/docs/examples/hiperfifo.c +++ b/docs/examples/hiperfifo.c @@ -153,7 +153,7 @@ static int multi_timer_cb(CURLM *multi, long timeout_ms, struct GlobalInfo *g) fprintf(MSG_OUT, "multi_timer_cb: Setting timeout to %ld ms\n", timeout_ms); /* - * if timeout_ms is -1, just delete the timer + * if timeout_ms is -1, delete the timer * * For all other values of timeout_ms, this should set or *update* the timer * to the new value diff --git a/docs/examples/http-post.c b/docs/examples/http-post.c index 8f8be06748..04a755e4aa 100644 --- a/docs/examples/http-post.c +++ b/docs/examples/http-post.c @@ -43,8 +43,7 @@ int main(void) curl = curl_easy_init(); if(curl) { /* First set the URL that is about to receive our POST. This URL can - just as well be an https:// URL if that is what should receive the - data. 
*/ + be an https:// URL if that is what should receive the data. */ curl_easy_setopt(curl, CURLOPT_URL, "http://postit.example.com/moo.cgi"); /* Now specify the POST data */ curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "name=daniel&project=curl"); diff --git a/docs/examples/httpput.c b/docs/examples/httpput.c index 58be20151e..977d31ccaf 100644 --- a/docs/examples/httpput.c +++ b/docs/examples/httpput.c @@ -89,9 +89,8 @@ int main(int argc, const char **argv) file = argv[1]; url = argv[2]; - /* get a FILE * of the same file, could also be made with - fdopen() from the previous descriptor, but hey this is just - an example! */ + /* get a FILE * of the same file, could also be made with fdopen() from the + previous descriptor, but hey this is an example! */ hd_src = fopen(file, "rb"); if(!hd_src) return 2; diff --git a/docs/examples/imap-append.c b/docs/examples/imap-append.c index 544473841c..77cf2bc02a 100644 --- a/docs/examples/imap-append.c +++ b/docs/examples/imap-append.c @@ -105,8 +105,8 @@ int main(void) curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com/Sent"); /* In this case, we are using a callback function to specify the data. You - * could just use the CURLOPT_READDATA option to specify a FILE pointer to - * read from. */ + * could use the CURLOPT_READDATA option to specify a FILE pointer to read + * from. 
*/ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb); curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx); curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); diff --git a/docs/examples/imap-create.c b/docs/examples/imap-create.c index f1eadb5860..0572171152 100644 --- a/docs/examples/imap-create.c +++ b/docs/examples/imap-create.c @@ -49,7 +49,7 @@ int main(void) curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); - /* This is just the server URL */ + /* This is the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com"); /* Set the CREATE command specifying the new folder name */ diff --git a/docs/examples/imap-delete.c b/docs/examples/imap-delete.c index 9d33f6fa6b..16b7d53941 100644 --- a/docs/examples/imap-delete.c +++ b/docs/examples/imap-delete.c @@ -49,7 +49,7 @@ int main(void) curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); - /* This is just the server URL */ + /* This is the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com"); /* Set the DELETE command specifying the existing folder */ diff --git a/docs/examples/imap-examine.c b/docs/examples/imap-examine.c index 083708b6c8..91903192b4 100644 --- a/docs/examples/imap-examine.c +++ b/docs/examples/imap-examine.c @@ -49,7 +49,7 @@ int main(void) curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); - /* This is just the server URL */ + /* This is the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com"); /* Set the EXAMINE command specifying the mailbox folder */ diff --git a/docs/examples/imap-lsub.c b/docs/examples/imap-lsub.c index 7ffeba04f7..123d9d35c5 100644 --- a/docs/examples/imap-lsub.c +++ b/docs/examples/imap-lsub.c @@ -49,7 +49,7 @@ int main(void) curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); - /* This is just the server URL 
*/ + /* This is the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com"); /* Set the LSUB command. Note the syntax is similar to that of a LIST diff --git a/docs/examples/imap-noop.c b/docs/examples/imap-noop.c index 5b53c03d48..5d8153e931 100644 --- a/docs/examples/imap-noop.c +++ b/docs/examples/imap-noop.c @@ -49,7 +49,7 @@ int main(void) curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); - /* This is just the server URL */ + /* This is the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "imap://imap.example.com"); /* Set the NOOP command */ diff --git a/docs/examples/pop3-noop.c b/docs/examples/pop3-noop.c index 4b1b050182..c8352df402 100644 --- a/docs/examples/pop3-noop.c +++ b/docs/examples/pop3-noop.c @@ -49,7 +49,7 @@ int main(void) curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); - /* This is just the server URL */ + /* This is the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com"); /* Set the NOOP command */ diff --git a/docs/examples/pop3-stat.c b/docs/examples/pop3-stat.c index 4f16546c2b..6c2d3646f4 100644 --- a/docs/examples/pop3-stat.c +++ b/docs/examples/pop3-stat.c @@ -49,7 +49,7 @@ int main(void) curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); - /* This is just the server URL */ + /* This is the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com"); /* Set the STAT command */ diff --git a/docs/examples/pop3-top.c b/docs/examples/pop3-top.c index 177417fde6..bb23eb99b9 100644 --- a/docs/examples/pop3-top.c +++ b/docs/examples/pop3-top.c @@ -49,7 +49,7 @@ int main(void) curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); - /* This is just the server URL */ + /* This is the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com"); /* Set the TOP command 
for message 1 to only include the headers */ diff --git a/docs/examples/pop3-uidl.c b/docs/examples/pop3-uidl.c index 98f4a3f14a..fb211093d8 100644 --- a/docs/examples/pop3-uidl.c +++ b/docs/examples/pop3-uidl.c @@ -49,7 +49,7 @@ int main(void) curl_easy_setopt(curl, CURLOPT_USERNAME, "user"); curl_easy_setopt(curl, CURLOPT_PASSWORD, "secret"); - /* This is just the server URL */ + /* This is the server URL */ curl_easy_setopt(curl, CURLOPT_URL, "pop3://pop.example.com"); /* Set the UIDL command */ diff --git a/docs/examples/smooth-gtk-thread.c b/docs/examples/smooth-gtk-thread.c index 3c2f259048..06eea1ff0e 100644 --- a/docs/examples/smooth-gtk-thread.c +++ b/docs/examples/smooth-gtk-thread.c @@ -46,7 +46,7 @@ static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; static int j = 0; -static gint num_urls = 9; /* Just make sure this is less than urls[] */ +static gint num_urls = 9; /* make sure this is less than urls[] */ static const char * const urls[] = { "90022", "90023", diff --git a/docs/examples/smtp-authzid.c b/docs/examples/smtp-authzid.c index e06946869f..fe91ba5e63 100644 --- a/docs/examples/smtp-authzid.c +++ b/docs/examples/smtp-authzid.c @@ -129,8 +129,8 @@ int main(void) recipients = curl_slist_append(recipients, TO_ADDR); curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients); - /* We are using a callback function to specify the payload (the headers and - * body of the message). You could just use the CURLOPT_READDATA option to + /* We are using a callback function to specify the payload (the headers + * and body of the message). You can use the CURLOPT_READDATA option to * specify a FILE pointer to read from. 
*/ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb); curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx); diff --git a/docs/examples/smtp-mail.c b/docs/examples/smtp-mail.c index d4a73e0690..b1590fd3af 100644 --- a/docs/examples/smtp-mail.c +++ b/docs/examples/smtp-mail.c @@ -117,8 +117,8 @@ int main(void) recipients = curl_slist_append(recipients, CC_ADDR); curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients); - /* We are using a callback function to specify the payload (the headers and - * body of the message). You could just use the CURLOPT_READDATA option to + /* We are using a callback function to specify the payload (the headers + * and body of the message). You can use the CURLOPT_READDATA option to * specify a FILE pointer to read from. */ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb); curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx); diff --git a/docs/examples/smtp-multi.c b/docs/examples/smtp-multi.c index 2cfdbb5793..86545a1ef3 100644 --- a/docs/examples/smtp-multi.c +++ b/docs/examples/smtp-multi.c @@ -116,8 +116,8 @@ int main(void) curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients); /* We are using a callback function to specify the payload (the headers - * and body of the message). You could just use the CURLOPT_READDATA - * option to specify a FILE pointer to read from. */ + * and body of the message). You can use the CURLOPT_READDATA option to + * specify a FILE pointer to read from. 
*/ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb); curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx); curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L); diff --git a/docs/examples/smtp-ssl.c b/docs/examples/smtp-ssl.c index 73c8b6a708..9cac9ecd52 100644 --- a/docs/examples/smtp-ssl.c +++ b/docs/examples/smtp-ssl.c @@ -139,8 +139,8 @@ int main(void) recipients = curl_slist_append(recipients, CC_MAIL); curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients); - /* We are using a callback function to specify the payload (the headers and - * body of the message). You could just use the CURLOPT_READDATA option to + /* We are using a callback function to specify the payload (the headers + * and body of the message). You can use the CURLOPT_READDATA option to * specify a FILE pointer to read from. */ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb); curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx); diff --git a/docs/examples/smtp-tls.c b/docs/examples/smtp-tls.c index f5cafaeb81..da452318a6 100644 --- a/docs/examples/smtp-tls.c +++ b/docs/examples/smtp-tls.c @@ -141,8 +141,8 @@ int main(void) recipients = curl_slist_append(recipients, CC_MAIL); curl_easy_setopt(curl, CURLOPT_MAIL_RCPT, recipients); - /* We are using a callback function to specify the payload (the headers and - * body of the message). You could just use the CURLOPT_READDATA option to + /* We are using a callback function to specify the payload (the headers + * and body of the message). You can use the CURLOPT_READDATA option to * specify a FILE pointer to read from. 
*/ curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb); curl_easy_setopt(curl, CURLOPT_READDATA, &upload_ctx); diff --git a/docs/examples/usercertinmem.c b/docs/examples/usercertinmem.c index 1b6694abb5..03dd148276 100644 --- a/docs/examples/usercertinmem.c +++ b/docs/examples/usercertinmem.c @@ -172,8 +172,8 @@ int main(void) printf("*** transfer failed ***\n"); /* second try: retrieve page using user certificate and key -> succeeds to - * load the certificate and key by installing a function doing - * the necessary "modifications" to the SSL CONTEXT just before link init + * load the certificate and key by installing a function doing the + * necessary "modifications" to the SSL CONTEXT before link init */ curl_easy_setopt(curl, CURLOPT_SSL_CTX_FUNCTION, sslctx_function); result = curl_easy_perform(curl); diff --git a/docs/internals/CHECKSRC.md b/docs/internals/CHECKSRC.md index 091a4d3107..32bc607338 100644 --- a/docs/internals/CHECKSRC.md +++ b/docs/internals/CHECKSRC.md @@ -172,8 +172,8 @@ This ignores the warning for overly long lines until it is re-enabled with: If the enabling is not performed before the end of the file, it is enabled again automatically for the next file. -You can also opt to ignore just N violations so that if you have a single long -line you just cannot shorten and is agreed to be fine anyway: +You can also opt to ignore N violations so that if you have a single long line +you cannot shorten and is agreed to be fine anyway: /* !checksrc! disable LONGLINE 1 */ diff --git a/docs/internals/CLIENT-WRITERS.md b/docs/internals/CLIENT-WRITERS.md index 0f2ff9ffdb..e00cd263be 100644 --- a/docs/internals/CLIENT-WRITERS.md +++ b/docs/internals/CLIENT-WRITERS.md @@ -100,7 +100,7 @@ typedef enum { If a writer for phase `PROTOCOL` is added to the chain, it is always added *after* any `RAW` or `TRANSFER_DECODE` and *before* any `CONTENT_DECODE` and `CLIENT` phase writer. 
If there is already a writer for the same phase -present, the new writer is inserted just before that one. +present, the new writer is inserted before that one. All transfers have a chain of 3 writers by default. A specific protocol handler may alter that by adding additional writers. The 3 standard writers diff --git a/docs/internals/CODE_STYLE.md b/docs/internals/CODE_STYLE.md index 3f0d5a4f2b..4a947e03dd 100644 --- a/docs/internals/CODE_STYLE.md +++ b/docs/internals/CODE_STYLE.md @@ -19,9 +19,9 @@ Our C code has a few style rules. Most of them are verified and upheld by the by the build system when built after `./configure --enable-debug` has been used. -It is normally not a problem for anyone to follow the guidelines, as you just -need to copy the style already used in the source code and there are no -particularly unusual rules in our set of rules. +It is normally not a problem for anyone to follow the guidelines, simply copy +the style already used in the source code and there are no particularly +unusual rules in our set of rules. We also work hard on writing code that are warning-free on all the major platforms and in general on as many platforms as possible. Code that causes @@ -39,7 +39,7 @@ understand it when debugging. Try using a non-confusing naming scheme for your new functions and variable names. It does not necessarily have to mean that you should use the same as in -other places of the code, just that the names should be logical, +other places of the code, only that the names should be logical, understandable and be named according to what they are used for. File-local functions should be made static. We like lower case names. 
diff --git a/docs/internals/CONNECTION-FILTERS.md b/docs/internals/CONNECTION-FILTERS.md
index 93bfa334a9..15676d0511 100644
--- a/docs/internals/CONNECTION-FILTERS.md
+++ b/docs/internals/CONNECTION-FILTERS.md
@@ -41,8 +41,7 @@ Curl_write(data, buffer)

While connection filters all do different things, they look the same from the
"outside". The code in `data` and `conn` does not really know **which**
-filters are installed. `conn` just writes into the first filter, whatever that
-is.
+filters are installed. `conn` writes into the first filter, whatever that is.

Same is true for filters. Each filter has a pointer to the `next` filter. When
SSL has encrypted the data, it does not write to a socket, it writes to the
@@ -135,7 +134,7 @@ do *not* use an http proxy, or socks, or https is lower.

As to transfer efficiency, writing and reading through a filter comes at near
zero cost *if the filter does not transform the data*. An http proxy or socks
-filter, once it is connected, just passes the calls through. Those filters
+filter, once it is connected, passes the calls through. Those filters
implementations look like this:

```c
@@ -277,7 +276,7 @@ connect (in time), it is torn down and another one is created for the next
address. This keeps the `TCP` filter simple.

The `HAPPY-EYEBALLS` on the other hand stays focused on its side of the
-problem. We can use it also to make other type of connection by just giving it
+problem. We can also use it to make other types of connections by giving it
another filter type to try to have happy eyeballing for QUIC:

```
diff --git a/docs/internals/LLIST.md b/docs/internals/LLIST.md
index b9e192c6f4..fadc9f1a3d 100644
--- a/docs/internals/LLIST.md
+++ b/docs/internals/LLIST.md
@@ -20,8 +20,8 @@ of `llist.c`). Use the functions.
initialized with a call to `Curl_llist_init()` before it can be used

To clean up a list, call `Curl_llist_destroy()`.
Since the linked lists -themselves do not allocate memory, it can also be fine to just *not* clean up -the list. +themselves do not allocate memory, it can also be fine to *not* clean up the +list. ## Add a node diff --git a/docs/internals/MULTI-EV.md b/docs/internals/MULTI-EV.md index 745955d5b6..9bea1eb367 100644 --- a/docs/internals/MULTI-EV.md +++ b/docs/internals/MULTI-EV.md @@ -117,10 +117,10 @@ those). ### And Come Again -While transfer and connection identifier are practically unique in a -libcurl application, sockets are not. Operating systems are keen on reusing -their resources, and the next socket may get the same identifier as -one just having been closed with high likelihood. +While transfer and connection identifiers are practically unique in a libcurl +application, sockets are not. Operating systems are keen on reusing their +resources, and the next socket may get the same identifier as a recently +closed one with high likelihood. This means that multi event handling needs to be informed *before* a close, clean up all its tracking and be ready to see that same socket identifier diff --git a/docs/internals/NEW-PROTOCOL.md b/docs/internals/NEW-PROTOCOL.md index 35beba6edb..832f8e843d 100644 --- a/docs/internals/NEW-PROTOCOL.md +++ b/docs/internals/NEW-PROTOCOL.md @@ -57,9 +57,9 @@ There should be a documented URL format. If there is an RFC for it there is no question about it but the syntax does not have to be a published RFC. It could be enough if it is already in use by other implementations. -If you make up the syntax just in order to be able to propose it to curl, then -you are in a bad place. URLs are designed and defined for interoperability. -There should at least be a good chance that other clients and servers can be +If you make up the syntax in order to be able to propose it to curl, then you +are in a bad place. URLs are designed and defined for interoperability. 
There +should at least be a good chance that other clients and servers can be implemented supporting the same URL syntax and work the same or similar way. URLs work on registered 'schemes'. There is a register of [all officially @@ -91,8 +91,8 @@ to curl and immediately once the code had been merged, the originator vanished from the face of the earth. That is fine, but we need to take the necessary precautions so when it happens we are still fine. -Our test infrastructure is powerful enough to test just about every possible -protocol - but it might require a bit of an effort to make it happen. +Our test infrastructure is powerful enough to test almost every protocol - but +it might require a bit of an effort to make it happen. ## Documentation diff --git a/docs/internals/SCORECARD.md b/docs/internals/SCORECARD.md index 019ddc5963..049e1567b6 100644 --- a/docs/internals/SCORECARD.md +++ b/docs/internals/SCORECARD.md @@ -43,7 +43,7 @@ curl> python3 tests/http/scorecard.py -h Apart from `-d/--downloads` there is `-u/--uploads` and `-r/--requests`. These are run with a variation of resource sizes and parallelism by default. You can -specify these in some way if you are just interested in a particular case. +specify these in some way if you are interested in a particular case. For example, to run downloads of a 1 MB resource only, 100 times with at max 6 parallel transfers, use: diff --git a/docs/internals/STRPARSE.md b/docs/internals/STRPARSE.md index 7d1a3a402f..8d0e8ba515 100644 --- a/docs/internals/STRPARSE.md +++ b/docs/internals/STRPARSE.md @@ -161,7 +161,7 @@ string. int curlx_str_number(char **linep, curl_size_t *nump, size_t max); ~~~ -Get an unsigned decimal number not larger than `max`. Leading zeroes are just +Get an unsigned decimal number not larger than `max`. Leading zeroes are swallowed. Return non-zero on error. Returns error if there was not a single digit. 
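The `curlx_str_number()` contract restated in the hunk above (leading zeroes swallowed, error without a single digit, error above `max`) can be sketched in standalone C. `parse_decimal` below is a hypothetical stand-in written for illustration, not curl's implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in mirroring the documented curlx_str_number()
   behavior: consume decimal digits (leading zeroes swallowed), fail if
   there is not a single digit, fail if the value exceeds `max`. */
static int parse_decimal(const char **linep, size_t *nump, size_t max)
{
  const char *p = *linep;
  size_t num = 0;

  if(*p < '0' || *p > '9')
    return 1;                     /* error: not even a single digit */
  while(*p >= '0' && *p <= '9') {
    size_t digit = (size_t)(*p - '0');
    if(num > (SIZE_MAX - digit) / 10)
      return 2;                   /* error: arithmetic overflow */
    num = num * 10 + digit;
    if(num > max)
      return 2;                   /* error: larger than max */
    p++;
  }
  *linep = p;                     /* leave pointer on first non-digit */
  *nump = num;
  return 0;
}
```

On success the line pointer is left on the first non-digit, which is what lets a caller keep parsing the rest of the line after the number.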
@@ -181,8 +181,8 @@ int curlx_str_hex(char **linep, curl_size_t *nump, size_t max); ~~~ Get an unsigned hexadecimal number not larger than `max`. Leading zeroes are -just swallowed. Return non-zero on error. Returns error if there was not a -single digit. Does *not* handled `0x` prefix. +swallowed. Return non-zero on error. Returns error if there was not a single +digit. Does *not* handle `0x` prefix. ## `curlx_str_octal` @@ -190,7 +190,7 @@ single digit. Does *not* handled `0x` prefix. int curlx_str_octal(char **linep, curl_size_t *nump, size_t max); ~~~ -Get an unsigned octal number not larger than `max`. Leading zeroes are just +Get an unsigned octal number not larger than `max`. Leading zeroes are swallowed. Return non-zero on error. Returns error if there was not a single digit. diff --git a/docs/internals/TLS-SESSIONS.md b/docs/internals/TLS-SESSIONS.md index add735bb03..4defced2a3 100644 --- a/docs/internals/TLS-SESSIONS.md +++ b/docs/internals/TLS-SESSIONS.md @@ -52,8 +52,8 @@ Examples: same as the previous, except it is configured to use TLSv1.2 as min and max versions. -Different configurations produce different keys which is just what -curl needs when handling SSL session tickets. +Different configurations produce different keys which is what curl needs when +handling SSL session tickets. One important thing: peer keys do not contain confidential information. If you configure a client certificate or SRP authentication with username/password, @@ -121,8 +121,8 @@ concurrent connections do not reuse the same ticket. #### Privacy and Security As mentioned above, ssl peer keys are not intended for storage in a file -system. They clearly show which hosts the user talked to. This maybe "just" -privacy relevant, but has security implications as an attacker might find +system. They clearly show which hosts the user talked to. This is not only +privacy relevant, but also has security implications as an attacker might find worthy targets among your peer keys.
Also, we do not recommend to persist TLSv1.2 tickets. @@ -138,11 +138,11 @@ The salt is generated randomly for each peer key on export. The SHA256 makes sure that the peer key cannot be reversed and that a slightly different key still produces a different result. -This means an attacker cannot just "grep" a session file for a particular -entry, e.g. if they want to know if you accessed a specific host. They *can* -however compute the SHA256 hashes for all salts in the file and find a -specific entry. They *cannot* find a hostname they do not know. They would -have to brute force by guessing. +This means an attacker cannot "grep" a session file for a particular entry, +e.g. if they want to know if you accessed a specific host. They *can* however +compute the SHA256 hashes for all salts in the file and find a specific entry. +They *cannot* find a hostname they do not know. They would have to brute force +by guessing. #### Import diff --git a/docs/libcurl/ABI.md b/docs/libcurl/ABI.md index b3d9d80475..9cd2ad801b 100644 --- a/docs/libcurl/ABI.md +++ b/docs/libcurl/ABI.md @@ -15,7 +15,7 @@ sizes/defines and more. ## Upgrades A libcurl upgrade does not break the ABI or change established and documented -behavior. Your application can remain using libcurl just as before, only with +behavior. Your application can keep using libcurl as before, only with fewer bugs and possibly with added new features. ## Version Numbers diff --git a/docs/libcurl/curl_easy_escape.md b/docs/libcurl/curl_easy_escape.md index 262bf131a8..6ff7c5ac53 100644 --- a/docs/libcurl/curl_easy_escape.md +++ b/docs/libcurl/curl_easy_escape.md @@ -52,7 +52,7 @@ to the function is encoded correctly. # URLs URLs are by definition *URL encoded*.
To create a proper URL from a set of -components that may not be URL encoded already, you cannot just URL encode the +components that may not be URL encoded already, you cannot URL encode the entire URL string with curl_easy_escape(3), because it then also converts colons, slashes and other symbols that you probably want untouched. diff --git a/docs/libcurl/curl_easy_getinfo.md b/docs/libcurl/curl_easy_getinfo.md index 783d01b43a..9adf085d10 100644 --- a/docs/libcurl/curl_easy_getinfo.md +++ b/docs/libcurl/curl_easy_getinfo.md @@ -196,16 +196,15 @@ In microseconds. (Added in 8.10.0) See CURLINFO_POSTTRANSFER_TIME_T(3) ## CURLINFO_PRETRANSFER_TIME -The time it took from the start until the file transfer is just about to -begin. This includes all pre-transfer commands and negotiations that are -specific to the particular protocol(s) involved. See -CURLINFO_PRETRANSFER_TIME(3) +The time it took from the start until the file transfer is about to begin. +This includes all pre-transfer commands and negotiations that are specific to +the particular protocol(s) involved. See CURLINFO_PRETRANSFER_TIME(3) ## CURLINFO_PRETRANSFER_TIME_T -The time it took from the start until the file transfer is just about to -begin. This includes all pre-transfer commands and negotiations that are -specific to the particular protocol(s) involved. In microseconds. See +The time it took from the start until the file transfer is about to begin. +This includes all pre-transfer commands and negotiations that are specific to +the particular protocol(s) involved. In microseconds. See CURLINFO_PRETRANSFER_TIME_T(3) ## CURLINFO_PRIMARY_IP diff --git a/docs/libcurl/curl_easy_reset.md b/docs/libcurl/curl_easy_reset.md index 979419de63..f27e86cbb6 100644 --- a/docs/libcurl/curl_easy_reset.md +++ b/docs/libcurl/curl_easy_reset.md @@ -30,7 +30,7 @@ void curl_easy_reset(CURL *handle); Re-initializes all options previously set on a specified curl handle to the default values. 
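The pitfall in the curl_easy_escape(3) hunk above (escaping a whole URL also converts the separators) is easiest to see with a per-component encoder. This is a simplified stand-in that keeps only the RFC 3986 unreserved characters; it is not curl's own function:

```c
#include <stddef.h>
#include <string.h>

/* Simplified percent-encoder for a single URL component: keep the RFC 3986
   unreserved set (letters, digits, "-", ".", "_", "~"), escape everything
   else as %XX. Running a full URL through this would also escape ':' and
   '/', which is exactly why components must be escaped individually. */
static size_t component_escape(const char *in, char *out, size_t outlen)
{
  static const char hex[] = "0123456789ABCDEF";
  size_t used = 0;

  if(!outlen)
    return 0;
  for(; *in; in++) {
    unsigned char c = (unsigned char)*in;
    int keep = (c >= 'A' && c <= 'Z') || (c >= 'a' && c <= 'z') ||
               (c >= '0' && c <= '9') || strchr("-._~", c) != NULL;
    size_t need = keep ? 1 : 3;
    if(used + need + 1 > outlen)
      break;                       /* out of room, stop cleanly */
    if(keep)
      out[used++] = (char)c;
    else {
      out[used++] = '%';
      out[used++] = hex[c >> 4];
      out[used++] = hex[c & 0x0f];
    }
  }
  out[used] = 0;
  return used;
}
```

Encoding "a b/c" with this yields "a%20b%2Fc": fine as a path segment, wrong as a whole URL.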
This puts back the handle to the same state as it was in when -it was just created with curl_easy_init(3). +it was created with curl_easy_init(3). It does not change the following information kept in the handle: live connections, the Session ID cache, the DNS cache, the cookies, the shares or diff --git a/docs/libcurl/curl_easy_setopt.md b/docs/libcurl/curl_easy_setopt.md index 771442b185..13d966f3c3 100644 --- a/docs/libcurl/curl_easy_setopt.md +++ b/docs/libcurl/curl_easy_setopt.md @@ -708,7 +708,7 @@ How to act on redirects after POST. See CURLOPT_POSTREDIR(3) ## CURLOPT_PREQUOTE -Commands to run just before transfer. See CURLOPT_PREQUOTE(3) +Commands to run immediately before transfer. See CURLOPT_PREQUOTE(3) ## CURLOPT_PREREQDATA diff --git a/docs/libcurl/curl_easy_ssls_export.md b/docs/libcurl/curl_easy_ssls_export.md index b4960be1d1..fdefa408e5 100644 --- a/docs/libcurl/curl_easy_ssls_export.md +++ b/docs/libcurl/curl_easy_ssls_export.md @@ -83,8 +83,7 @@ a cryptographic hash of the salt and **session_key**. The salt is generated for every session individually. Storing **shmac** is recommended when placing session tickets in a file, for example. -A third party may brute-force known hostnames, but cannot just "grep" for -them. +A third party may brute-force known hostnames, but cannot "grep" for them. ## Session Data diff --git a/docs/libcurl/curl_global_cleanup.md b/docs/libcurl/curl_global_cleanup.md index 1c3dac9aa3..e6f8e42fd7 100644 --- a/docs/libcurl/curl_global_cleanup.md +++ b/docs/libcurl/curl_global_cleanup.md @@ -37,11 +37,11 @@ curl_version_info(3) has the CURL_VERSION_THREADSAFE feature bit set (most platforms). If this is not thread-safe, you must not call this function when any other -thread in the program (i.e. a thread sharing the same memory) is running. -This does not just mean no other thread that is using libcurl. 
Because -curl_global_cleanup(3) calls functions of other libraries that are -similarly thread-unsafe, it could conflict with any other thread that uses -these other libraries. +thread in the program (i.e. a thread sharing the same memory) is running. This +does not only mean other threads that use libcurl. Because +curl_global_cleanup(3) calls functions of other libraries that are similarly +thread-unsafe, it could conflict with any other thread that uses these other +libraries. See the description in libcurl(3) of global environment requirements for details of how to use this function. diff --git a/docs/libcurl/curl_global_init.md b/docs/libcurl/curl_global_init.md index 093a530351..3803438ff4 100644 --- a/docs/libcurl/curl_global_init.md +++ b/docs/libcurl/curl_global_init.md @@ -50,10 +50,10 @@ the `threadsafe` feature set (added in 7.84.0). If this is not thread-safe (the bit mentioned above is not set), you must not call this function when any other thread in the program (i.e. a thread sharing -the same memory) is running. This does not just mean no other thread that is -using libcurl. Because curl_global_init(3) calls functions of other libraries -that are similarly thread-unsafe, it could conflict with any other thread that -uses these other libraries. +the same memory) is running. This does not only mean other threads that use +libcurl. Because curl_global_init(3) calls functions of other libraries that +are similarly thread-unsafe, it could conflict with any other thread that uses +these other libraries. 
If you are initializing libcurl from a Windows DLL you should not initialize it from *DllMain* or a static initializer because Windows holds the loader diff --git a/docs/libcurl/curl_global_sslset.md b/docs/libcurl/curl_global_sslset.md index 217b28816f..8ef0ca9992 100644 --- a/docs/libcurl/curl_global_sslset.md +++ b/docs/libcurl/curl_global_sslset.md @@ -62,7 +62,7 @@ curl_version_info(3) has the CURL_VERSION_THREADSAFE feature bit set If this is not thread-safe, you must not call this function when any other thread in the program (i.e. a thread sharing the same memory) is running. -This does not just mean no other thread that is using libcurl. +This does not only mean other threads that use libcurl. # Names @@ -72,7 +72,7 @@ Schannel, wolfSSL The name "OpenSSL" is used for all versions of OpenSSL and its associated forks/flavors in this function. OpenSSL, BoringSSL, LibreSSL, quictls and AmiSSL are all supported by libcurl, but in the eyes of curl_global_sslset(3) -they are all just "OpenSSL". They all mostly provide the same API. +they are all called "OpenSSL". They all mostly provide the same API. curl_version_info(3) can return more specific info about the exact OpenSSL flavor and version number in use. diff --git a/docs/libcurl/curl_global_trace.md b/docs/libcurl/curl_global_trace.md index fa08df7c72..35bf6c6479 100644 --- a/docs/libcurl/curl_global_trace.md +++ b/docs/libcurl/curl_global_trace.md @@ -41,7 +41,7 @@ the CURL_VERSION_THREADSAFE feature bit set (most platforms). If this is not thread-safe, you must not call this function when any other thread in the program (i.e. a thread sharing the same memory) is running. This -does not just mean no other thread that is using libcurl. Because +does not only mean other threads that use libcurl. Because curl_global_init(3) may call functions of other libraries that are similarly thread-unsafe, it could conflict with any other thread that uses these other libraries.
diff --git a/docs/libcurl/curl_mprintf.md b/docs/libcurl/curl_mprintf.md index 508dece919..72ee0a1f02 100644 --- a/docs/libcurl/curl_mprintf.md +++ b/docs/libcurl/curl_mprintf.md @@ -159,14 +159,14 @@ An optional precision in the form of a period ('.') followed by an optional decimal digit string. Instead of a decimal digit string one may write "*" or "*m$" (for some decimal integer m) to specify that the precision is given in the next argument, or in the *m-th* argument, respectively, which must be of -type int. If the precision is given as just '.', the precision is taken to be -zero. A negative precision is taken as if the precision were omitted. This -gives the minimum number of digits to appear for **d**, **i**, **o**, -**u**, **x**, and **X** conversions, the number of digits to appear -after the radix character for **a**, **A**, **e**, **E**, **f**, and -**F** conversions, the maximum number of significant digits for **g** and -**G** conversions, or the maximum number of characters to be printed from a -string for **s** and **S** conversions. +type int. If the precision is given as a single '.', the precision is taken to +be zero. A negative precision is taken as if the precision were omitted. This +gives the minimum number of digits to appear for **d**, **i**, **o**, **u**, +**x**, and **X** conversions, the number of digits to appear after the radix +character for **a**, **A**, **e**, **E**, **f**, and **F** conversions, the +maximum number of significant digits for **g** and **G** conversions, or the +maximum number of characters to be printed from a string for **s** and **S** +conversions. 
# Length modifier diff --git a/docs/libcurl/curl_multi_assign.md b/docs/libcurl/curl_multi_assign.md index 6ae60fe667..279965f534 100644 --- a/docs/libcurl/curl_multi_assign.md +++ b/docs/libcurl/curl_multi_assign.md @@ -41,11 +41,6 @@ libcurl only keeps one single pointer associated with a socket, so calling this function several times for the same socket makes the last set pointer get used. -The idea here being that this association (socket to private pointer) is -something that just about every application that uses this API needs and then -libcurl can just as well do it since it already has the necessary -functionality. - It is acceptable to call this function from your multi callback functions. # %PROTOCOLS% diff --git a/docs/libcurl/curl_multi_info_read.md b/docs/libcurl/curl_multi_info_read.md index 27ec408f86..35f96c2db7 100644 --- a/docs/libcurl/curl_multi_info_read.md +++ b/docs/libcurl/curl_multi_info_read.md @@ -27,10 +27,10 @@ CURLMsg *curl_multi_info_read(CURLM *multi_handle, int *msgs_in_queue); # DESCRIPTION -Ask the multi handle if there are any messages from the individual -transfers. Messages may include information such as an error code from the -transfer or just the fact that a transfer is completed. More details on these -should be written down as well. +Ask the multi handle if there are any messages from the individual transfers. +Messages may include information such as an error code from the transfer or +the fact that a transfer is completed. More details on these should be written +down as well. Repeated calls to this function returns a new struct each time, until a NULL is returned as a signal that there is no more to get at this point. The @@ -63,7 +63,7 @@ struct CURLMsg { ~~~ When **msg** is *CURLMSG_DONE*, the message identifies a transfer that is done, and then **result** contains the return code for the easy handle -that just completed. +that completed. At this point, there are no other **msg** types defined. 
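The read-until-NULL contract described for curl_multi_info_read(3) can be modeled with a tiny stand-in queue; the types and `info_read` function here are hypothetical, not libcurl's:

```c
#include <stddef.h>

/* Stand-in for the CURLMsg drain pattern: each call hands back one queued
   message plus the count still pending; NULL signals the queue is empty. */
struct msg { int done_id; int result; };

static struct msg queue[3] = { {1, 0}, {2, 28}, {3, 0} };
static int head = 0;

static struct msg *info_read(int *msgs_in_queue)
{
  if(head >= 3) {
    *msgs_in_queue = 0;
    return NULL;
  }
  *msgs_in_queue = 3 - head - 1;  /* messages remaining after this one */
  return &queue[head++];
}
```

An application loops `while((m = info_read(&n)))` and acts on each completed transfer's result, mirroring how the real CURLMSG_DONE messages are drained.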
diff --git a/docs/libcurl/curl_multi_perform.md b/docs/libcurl/curl_multi_perform.md index 06ed490057..e0335d81a8 100644 --- a/docs/libcurl/curl_multi_perform.md +++ b/docs/libcurl/curl_multi_perform.md @@ -40,8 +40,8 @@ or a timeout has elapsed, the application should call this function to read/write whatever there is to read or write right now etc. curl_multi_perform(3) returns as soon as the reads/writes are done. This function does not require that there actually is any data available for -reading or that data can be written, it can be called just in case. It stores -the number of handles that still transfer data in the second argument's +reading or that data can be written; it can be called as a precaution. It +stores the number of handles that still transfer data in the second argument's integer-pointer. If the amount of *running_handles* is changed from the previous call (or diff --git a/docs/libcurl/curl_multi_remove_handle.md b/docs/libcurl/curl_multi_remove_handle.md index b36a62d235..fed6439c44 100644 --- a/docs/libcurl/curl_multi_remove_handle.md +++ b/docs/libcurl/curl_multi_remove_handle.md @@ -37,7 +37,7 @@ Removing an easy handle while being in use is perfectly legal and effectively halts the transfer in progress involving that easy handle. All other easy handles and transfers remain unaffected. -It is fine to remove a handle at any time during a transfer, just not from +It is fine to remove a handle at any time during a transfer, but not from within any libcurl callback function. Removing an easy handle from the multi handle before the corresponding diff --git a/docs/libcurl/curl_multi_socket_all.md b/docs/libcurl/curl_multi_socket_all.md index 5428b9786b..48fe980e71 100644 --- a/docs/libcurl/curl_multi_socket_all.md +++ b/docs/libcurl/curl_multi_socket_all.md @@ -38,8 +38,8 @@ still running easy handles within the multi handle. When this number reaches zero, all transfers are complete/done.
Force libcurl to (re-)check all its internal sockets and transfers instead of -just a single one by calling curl_multi_socket_all(3). Note that there should -not be any reason to use this function. +a single one by calling curl_multi_socket_all(3). Note that there should not +be any reason to use this function. # %PROTOCOLS% diff --git a/docs/libcurl/curl_multi_timeout.md b/docs/libcurl/curl_multi_timeout.md index 273ab79538..5995f6528c 100644 --- a/docs/libcurl/curl_multi_timeout.md +++ b/docs/libcurl/curl_multi_timeout.md @@ -45,9 +45,9 @@ An application that uses the *multi_socket* API should not use this function. It should instead use the CURLMOPT_TIMERFUNCTION(3) option for proper and desired behavior. -Note: if libcurl returns a -1 timeout here, it just means that libcurl -currently has no stored timeout value. You must not wait too long (more than a -few seconds perhaps) before you call curl_multi_perform(3) again. +Note: if libcurl returns a -1 timeout here, it means that libcurl currently +has no stored timeout value. You must not wait too long (more than a few +seconds perhaps) before you call curl_multi_perform(3) again. # %PROTOCOLS% diff --git a/docs/libcurl/curl_multi_waitfds.md b/docs/libcurl/curl_multi_waitfds.md index 4d6611aa26..7a4dac894b 100644 --- a/docs/libcurl/curl_multi_waitfds.md +++ b/docs/libcurl/curl_multi_waitfds.md @@ -48,7 +48,7 @@ If the *fd_count* argument is not a null pointer, it points to a variable that on return specifies the number of descriptors used by the multi_handle to be checked for being ready to read or write. -The client code can pass *size* equal to zero just to get the number of the +The client code can pass *size* equal to zero to get the number of descriptors and allocate appropriate storage for them to be used in a subsequent function call. In this case, *fd_count* receives a number greater than or equal to the number of descriptors.
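The pass-size-zero-first approach described for curl_multi_waitfds(3) is the classic two-call query pattern. Sketched with a stand-in function (`get_fds` and its fixed descriptor list are hypothetical):

```c
#include <string.h>

/* Stand-in for the two-call pattern: with too small a size the function
   still reports how many descriptors it has; with enough room it copies
   them out. */
static int get_fds(int *fds, unsigned int size, unsigned int *fd_count)
{
  static const int live[] = { 4, 7, 9 };   /* pretend sockets */
  unsigned int n = 3;

  if(fd_count)
    *fd_count = n;
  if(size < n)
    return 1;                              /* buffer too small */
  memcpy(fds, live, n * sizeof(int));
  return 0;
}
```

The caller first invokes it with size zero to learn the count, allocates that many entries, then calls again to fill them.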
diff --git a/docs/libcurl/curl_url_get.md b/docs/libcurl/curl_url_get.md index bed1c02f17..9916166f97 100644 --- a/docs/libcurl/curl_url_get.md +++ b/docs/libcurl/curl_url_get.md @@ -187,8 +187,8 @@ If the hostname is a numeric IPv6 address, this field might also be set. ## CURLUPART_PORT -A port cannot be URL decoded on get. This number is returned in a string just -like all other parts. That string is guaranteed to hold a valid port number in +A port cannot be URL decoded on get. This number is returned in a string like +all other parts. That string is guaranteed to hold a valid port number in ASCII using base 10. ## CURLUPART_PATH diff --git a/docs/libcurl/curl_version_info.md b/docs/libcurl/curl_version_info.md index 20e84fbd73..83c7cdb9fe 100644 --- a/docs/libcurl/curl_version_info.md +++ b/docs/libcurl/curl_version_info.md @@ -116,7 +116,7 @@ new the libcurl you are using is. You are however guaranteed to get a struct that you have a matching struct for in the header, as you tell libcurl your "age" with the input argument. -*version* is just an ASCII string for the libcurl version. +*version* is an ASCII string for the libcurl version. *version_num* is a 24-bit number created like this: \<8 bits major number\> | \<8 bits minor number\> | \<8 bits patch number\>. Version 7.9.8 is therefore diff --git a/docs/libcurl/libcurl-errors.md b/docs/libcurl/libcurl-errors.md index e406e0ab61..7ae1319dcf 100644 --- a/docs/libcurl/libcurl-errors.md +++ b/docs/libcurl/libcurl-errors.md @@ -29,11 +29,10 @@ Why they occur and possibly what you can do to fix the problem are also included # CURLcode Almost all "easy" interface functions return a CURLcode error code. No matter -what, using the curl_easy_setopt(3) option CURLOPT_ERRORBUFFER(3) -is a good idea as it gives you a human readable error string that may offer -more details about the cause of the error than just the error code. -curl_easy_strerror(3) can be called to get an error string from a given -CURLcode number. 
+what, using the curl_easy_setopt(3) option CURLOPT_ERRORBUFFER(3) is a good +idea as it gives you a human readable error string that may offer more details +about the cause of the error than the error code alone. curl_easy_strerror(3) +can be called to get an error string from a given CURLcode number. CURLcode is one of the following: @@ -45,8 +44,7 @@ All fine. Proceed as usual. The URL you passed to libcurl used a protocol that this libcurl does not support. The support might be a compile-time option that you did not use, it -can be a misspelled protocol string or just a protocol libcurl has no code -for. +can be a misspelled protocol string or a protocol libcurl has no code for. ## CURLE_FAILED_INIT (2) diff --git a/docs/libcurl/libcurl-multi.md b/docs/libcurl/libcurl-multi.md index 01bef3a77f..b400f61a03 100644 --- a/docs/libcurl/libcurl-multi.md +++ b/docs/libcurl/libcurl-multi.md @@ -130,13 +130,12 @@ using large numbers of simultaneous connections. curl_multi_socket_action(3) is then used instead of curl_multi_perform(3). -When using this API, you add easy handles to the multi handle just as with the +When using this API, you add easy handles to the multi handle as with the normal multi interface. Then you also set two callbacks with the -CURLMOPT_SOCKETFUNCTION(3) and CURLMOPT_TIMERFUNCTION(3) options -to curl_multi_setopt(3). They are two callback functions that libcurl -calls with information about what sockets to wait for, and for what activity, -and what the current timeout time is - if that expires libcurl should be -notified. +CURLMOPT_SOCKETFUNCTION(3) and CURLMOPT_TIMERFUNCTION(3) options to +curl_multi_setopt(3). They are two callback functions that libcurl calls with +information about what sockets to wait for, and for what activity, and what +the current timeout time is - if that expires libcurl should be notified.
The multi_socket API is designed to inform your application about which sockets libcurl is currently using and for what activities (read and/or write) diff --git a/docs/libcurl/libcurl-security.md b/docs/libcurl/libcurl-security.md index 0dc71410a5..17d9914c36 100644 --- a/docs/libcurl/libcurl-security.md +++ b/docs/libcurl/libcurl-security.md @@ -64,8 +64,8 @@ plain text anywhere. Many of the protocols libcurl supports send name and password unencrypted as clear text (HTTP Basic authentication, FTP, TELNET etc). It is easy for anyone -on your network or a network nearby yours to just fire up a network analyzer -tool and eavesdrop on your passwords. Do not let the fact that HTTP Basic uses +on your network or a network near yours to fire up a network analyzer tool +and eavesdrop on your passwords. Do not let the fact that HTTP Basic uses base64 encoded passwords fool you. They may not look readable at a first glance, but they are easily "deciphered" by anyone within seconds. @@ -118,11 +118,11 @@ transfers require a new connection with validation performed again. # Redirects -The CURLOPT_FOLLOWLOCATION(3) option automatically follows HTTP -redirects sent by a remote server. These redirects can refer to any kind of -URL, not just HTTP. libcurl restricts the protocols allowed to be used in -redirects for security reasons: only HTTP, HTTPS, FTP and FTPS are -enabled by default. Applications may opt to restrict that set further. +The CURLOPT_FOLLOWLOCATION(3) option automatically follows HTTP redirects sent +by a remote server. These redirects can refer to any kind of URL, not only +HTTP. libcurl restricts the protocols allowed to be used in redirects for +security reasons: only HTTP, HTTPS, FTP and FTPS are enabled by default. +Applications may opt to restrict that set further. A redirect to a file: URL would cause the libcurl to read (or write) arbitrary files from the local file system.
If the application returns the data back to @@ -131,8 +131,8 @@ leverage this to read otherwise forbidden data (e.g. **file://localhost/etc/passwd**). If authentication credentials are stored in the ~/.netrc file, or Kerberos is -in use, any other URL type (not just file:) that requires authentication is -also at risk. A redirect such as **ftp://some-internal-server/private-file** would +in use, any other URL type (not only file:) that requires authentication is +also at risk. A redirect such as **ftp://some-internal-server/private-file** would then return data even when the server is password protected. In the same way, if an unencrypted SSH private key has been configured for the @@ -178,7 +178,7 @@ of a server behind a firewall, such as 127.0.0.1 or 10.1.2.3. Applications can mitigate against this by setting a CURLOPT_OPENSOCKETFUNCTION(3) or CURLOPT_PREREQFUNCTION(3) and checking the address before a connection. -All the malicious scenarios regarding redirected URLs apply just as well to +All the malicious scenarios regarding redirected URLs apply equally to non-redirected URLs, if the user is allowed to specify an arbitrary URL that could point to a private resource. For example, a web app providing a translation service might happily translate **file://localhost/etc/passwd** @@ -211,15 +211,15 @@ or a mix of decimal, octal or hexadecimal encoding. # IPv6 Addresses -libcurl handles IPv6 addresses transparently and just as easily as IPv4 -addresses. That means that a sanitizing function that filters out addresses -like 127.0.0.1 is not sufficient - the equivalent IPv6 addresses **::1**, -**::**, **0:00::0:1**, **::127.0.0.1** and **::ffff:7f00:1** supplied -somehow by an attacker would all bypass a naive filter and could allow access -to undesired local resources. IPv6 also has special address blocks like -link-local and site-local that generally should not be accessed by a -server-side libcurl-using application.
A poorly configured firewall installed -in a data center, organization or server may also be configured to limit IPv4 +libcurl handles IPv6 addresses transparently and as easily as IPv4 addresses. +That means that a sanitizing function that filters out addresses like +127.0.0.1 is not sufficient - the equivalent IPv6 addresses **::1**, **::**, +**0:00::0:1**, **::127.0.0.1** and **::ffff:7f00:1** supplied somehow by an +attacker would all bypass a naive filter and could allow access to undesired +local resources. IPv6 also has special address blocks like link-local and +site-local that generally should not be accessed by a server-side +libcurl-using application. A poorly configured firewall installed in a data +center, organization or server may also be configured to limit IPv4 connections but leave IPv6 connections wide open. In some cases, setting CURLOPT_IPRESOLVE(3) to CURL_IPRESOLVE_V4 can be used to limit resolved addresses to IPv4 only and bypass these issues. @@ -294,7 +294,7 @@ system. The conclusion we have come to is that this is a weakness or feature in the Windows operating system itself, that we as an application cannot safely -protect users against. It would just be a whack-a-mole race we do not want to +protect users against. It would be a whack-a-mole race we do not want to participate in. There are too many ways to do it and there is no knob we can use to turn off the practice. @@ -333,8 +333,8 @@ libcurl programs can use CURLOPT_PROTOCOLS_STR(3) to limit what URL schemes it a ## consider not allowing the user to set the full URL -Maybe just let the user provide data for parts of it? Or maybe filter input to -only allow specific choices? Remember that the naive approach of appending a +Maybe let the user provide data for parts of it? Or maybe filter input to only +allow specific choices? Remember that the naive approach of appending a user-specified string to a base URL could still allow unexpected results through use of characters like ../ or ?
or Unicode characters or hiding characters using various escaping means. @@ -396,10 +396,10 @@ using a SOCKS or HTTP proxy in between curl and the target server. # Denial of Service A malicious server could cause libcurl to effectively hang by sending data -slowly, or even no data at all but just keeping the TCP connection open. This -could effectively result in a denial-of-service attack. The -CURLOPT_TIMEOUT(3) and/or CURLOPT_LOW_SPEED_LIMIT(3) options can -be used to mitigate against this. +slowly, or even no data at all but keeping the TCP connection open. This could +effectively result in a denial-of-service attack. The CURLOPT_TIMEOUT(3) +and/or CURLOPT_LOW_SPEED_LIMIT(3) options can be used to mitigate against +this. A malicious server could cause libcurl to download an infinite amount of data, potentially causing system resources to be exhausted resulting in a system or @@ -455,8 +455,8 @@ passwords, things like URLs, cookies or even filenames could also hold sensitive data. To avoid this problem, you must of course use your common sense. Often, you -can just edit out the sensitive data or just search/replace your true -information with faked data. +can edit out the sensitive data or search/replace your true information with +faked data. # setuid applications using libcurl @@ -515,6 +515,6 @@ cookies. # Report Security Problems -Should you detect or just suspect a security problem in libcurl or curl, -contact the project curl security team immediately. See +Should you detect or suspect a security problem in libcurl or curl, contact +the project curl security team immediately. See https://curl.se/dev/secprocess.html for details. 
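The libcurl-security hunks above mention host addresses supplied "in a mix of decimal, octal or hexadecimal encoding". The classic inet_aton() parser accepts all of these, which is what makes naive string filters easy to bypass. A POSIX-style sketch (not curl code, glibc-flavored parsing assumed):

```c
#include <arpa/inet.h>

/* Two textual addresses can name the same host even when the strings look
   nothing alike: inet_aton-style parsing accepts decimal, octal (leading 0)
   and hex (leading 0x) components, plus short and single-number forms. */
static int same_ipv4(const char *a, const char *b)
{
  struct in_addr ia, ib;
  if(!inet_aton(a, &ia) || !inet_aton(b, &ib))
    return 0;
  return ia.s_addr == ib.s_addr;
}
```

With this, "0x7f.0.0.1", "0177.0.0.1", "127.1" and "2130706433" all compare equal to "127.0.0.1", so a filter that only rejects the literal string "127.0.0.1" blocks none of them.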
diff --git a/docs/libcurl/libcurl-tutorial.md b/docs/libcurl/libcurl-tutorial.md index 696fb5489b..c2a409088f 100644 --- a/docs/libcurl/libcurl-tutorial.md +++ b/docs/libcurl/libcurl-tutorial.md @@ -92,9 +92,9 @@ The people behind libcurl have put a considerable effort to make libcurl work on a large amount of different operating systems and environments. You program libcurl the same way on all platforms that libcurl runs on. There -are only a few minor details that differ. If you just make sure to write your -code portable enough, you can create a portable program. libcurl should not -stop you from that. +are only a few minor details that differ. If you make sure to write your code +portable enough, you can create a portable program. libcurl should not stop +you from that. # Global Preparation @@ -171,7 +171,7 @@ Get an easy handle with handle = curl_easy_init(); ~~~ It returns an easy handle. Using that you proceed to the next step: setting -up your preferred actions. A handle is just a logic entity for the upcoming +up your preferred actions. A handle is a logical entity for the upcoming transfer or series of transfers. You set properties and options for this handle using @@ -311,8 +311,8 @@ uploading to a remote FTP site is similar to uploading data to an HTTP server with a PUT request. Of course, first you either create an easy handle or you reuse one existing -one. Then you set the URL to operate on just like before. This is the remote -URL, that we now upload. +one. Then you set the URL to operate on as before. This is the remote URL +that we now upload. Since we write an application, we most likely want libcurl to get the upload data by asking us for it.
To make it do that, we set the read callback and the @@ -620,15 +620,17 @@ handle: ~~~ Since all options on an easy handle are "sticky", they remain the same until -changed even if you do call curl_easy_perform(3), you may need to tell -curl to go back to a plain GET request if you intend to do one as your next -request. You force an easy handle to go back to GET by using the -CURLOPT_HTTPGET(3) option: +changed even if you do call curl_easy_perform(3), so you may need to tell curl +to go back to a plain GET request if you intend to do one as your next request. +You force an easy handle to go back to GET by using the CURLOPT_HTTPGET(3) +option: + ~~~c curl_easy_setopt(handle, CURLOPT_HTTPGET, 1L); ~~~ -Just setting CURLOPT_POSTFIELDS(3) to "" or NULL does *not* stop libcurl -from doing a POST. It just makes it POST without any data to send! + +Setting CURLOPT_POSTFIELDS(3) to "" or NULL does *not* stop libcurl from doing +a POST. It makes it POST without any data to send! # Converting from deprecated form API to MIME API @@ -956,10 +958,10 @@ Mozilla JavaScript engine in the past. Re-cycling the same easy handle several times when doing multiple requests is the way to go. -After each single curl_easy_perform(3) operation, libcurl keeps the -connection alive and open. A subsequent request using the same easy handle to -the same host might just be able to use the already open connection! This -reduces network impact a lot. +After each single curl_easy_perform(3) operation, libcurl keeps the connection +alive and open. A subsequent request using the same easy handle to the same +host might be able to reuse the already open connection! This reduces network +impact a lot. Even if the connection is dropped, all connections involving SSL to the same host again, benefit from libcurl's session ID cache that drastically reduces @@ -978,9 +980,9 @@ may also be added in the future.
Each easy handle attempts to keep the last few connections alive for a while in case they are to be used again. You can set the size of this "cache" with -the CURLOPT_MAXCONNECTS(3) option. Default is 5. There is rarely any -point in changing this value, and if you think of changing this it is often -just a matter of thinking again. +the CURLOPT_MAXCONNECTS(3) option. Default is 5. There is rarely any point in +changing this value, and if you think of changing this it is often a reason to +think again. To force your upcoming request to not use an already existing connection, you can do that by setting CURLOPT_FRESH_CONNECT(3) to 1. In a similar @@ -1025,9 +1027,9 @@ libcurl is your friend here too. ## CURLOPT_CUSTOMREQUEST -If just changing the actual HTTP request keyword is what you want, like when -GET, HEAD or POST is not good enough for you, CURLOPT_CUSTOMREQUEST(3) -is there for you. It is simple to use: +If changing the actual HTTP request keyword is what you want, like when GET, +HEAD or POST is not good enough for you, CURLOPT_CUSTOMREQUEST(3) is there for +you. It is simple to use: ~~~c curl_easy_setopt(handle, CURLOPT_CUSTOMREQUEST, "MYOWNREQUEST"); @@ -1152,8 +1154,8 @@ content transfer is performed. ## FTP Custom CURLOPT_CUSTOMREQUEST If you do want to list the contents of an FTP directory using your own defined -FTP command, CURLOPT_CUSTOMREQUEST(3) does just that. "NLST" is the default -one for listing directories but you are free to pass in your idea of a good +FTP command, CURLOPT_CUSTOMREQUEST(3) does that. "NLST" is the default one for +listing directories but you are free to pass in your idea of a good alternative. # Cookies Without Chocolate Chips @@ -1170,8 +1172,8 @@ update them. Server use cookies to "track" users and to keep "sessions". Cookies are sent from server to clients with the header Set-Cookie: and they are sent from clients to servers with the Cookie: header. 
-To just send whatever cookie you want to a server, you can use -CURLOPT_COOKIE(3) to set a cookie string like this: +To send whatever cookie you want to a server, you can use CURLOPT_COOKIE(3) to +set a cookie string like this: ~~~c curl_easy_setopt(handle, CURLOPT_COOKIE, "name1=var1; name2=var2;"); @@ -1186,16 +1188,15 @@ when you make a request, you tell libcurl to read the previous headers to figure out which cookies to use. Set the header file to read cookies from with CURLOPT_COOKIEFILE(3). -The CURLOPT_COOKIEFILE(3) option also automatically enables the cookie -parser in libcurl. Until the cookie parser is enabled, libcurl does not parse -or understand incoming cookies and they are just be ignored. However, when the +The CURLOPT_COOKIEFILE(3) option also automatically enables the cookie parser +in libcurl. Until the cookie parser is enabled, libcurl does not parse or +understand incoming cookies and they are instead ignored. However, when the parser is enabled the cookies are understood and the cookies are kept in -memory and used properly in subsequent requests when the same handle is -used. Many times this is enough, and you may not have to save the cookies to -disk at all. Note that the file you specify to CURLOPT_COOKIEFILE(3) -does not have to exist to enable the parser, so a common way to just enable -the parser and not read any cookies is to use the name of a file you know does -not exist. +memory and used properly in subsequent requests when the same handle is used. +Many times this is enough, and you may not have to save the cookies to disk at +all. Note that the file you specify to CURLOPT_COOKIEFILE(3) does not have to +exist to enable the parser, so a common way to enable the parser and not read +any cookies is to use the name of a file you know does not exist. 
If you would rather use existing cookies that you have previously received with your Netscape or Mozilla browsers, you can make libcurl use that cookie @@ -1370,9 +1371,9 @@ multiple transfers at the same time by adding up multiple easy handles into a "multi stack". You create the easy handles you want, one for each concurrent transfer, and -you set all the options just like you learned above, and then you create a -multi handle with curl_multi_init(3) and add all those easy handles to -that multi handle with curl_multi_add_handle(3). +you set all the options like you learned above, and then you create a multi +handle with curl_multi_init(3) and add all those easy handles to that multi +handle with curl_multi_add_handle(3). When you have added the handles you have for the moment (you can still add new ones at any time), you start the transfers by calling diff --git a/docs/libcurl/libcurl-url.md b/docs/libcurl/libcurl-url.md index 82de7e7821..b39d0304d5 100644 --- a/docs/libcurl/libcurl-url.md +++ b/docs/libcurl/libcurl-url.md @@ -45,7 +45,7 @@ When done with it, clean it up with curl_url_cleanup(3) # DUPLICATE -When you need a copy of a handle, just duplicate it with curl_url_dup(3): +When you need a copy of a handle, duplicate it with curl_url_dup(3): ~~~c CURLU *nh = curl_url_dup(h); ~~~ diff --git a/docs/libcurl/libcurl.m4 b/docs/libcurl/libcurl.m4 index 6ff52cc1b4..c76a5e8971 100644 --- a/docs/libcurl/libcurl.m4 +++ b/docs/libcurl/libcurl.m4 @@ -219,7 +219,7 @@ AC_DEFUN([LIBCURL_CHECK_CONFIG], if test -z "$_libcurl_protocols"; then - # We do not have --protocols, so just assume that all + # We do not have --protocols; assume that all # protocols are available _libcurl_protocols="HTTP FTP FILE TELNET LDAP DICT TFTP" diff --git a/docs/libcurl/libcurl.md b/docs/libcurl/libcurl.md index 47bf210087..669550a3ac 100644 --- a/docs/libcurl/libcurl.md +++ b/docs/libcurl/libcurl.md @@ -192,8 +192,8 @@ libcurl at all. 
Call curl_global_cleanup(3) immediately before the program exits, when the program is again only one thread and after its last use of libcurl. -It is not actually required that the functions be called at the beginning -and end of the program -- that is just usually the easiest way to do it. +It is not actually required that the functions be called at the beginning and +end of the program -- that is usually the easiest way to do it. You can call both of these multiple times, as long as all calls meet these requirements and the number of calls to each is the same. @@ -205,13 +205,13 @@ other parts of the program -- it does not know whether they use libcurl or not. Its code does not necessarily run at the start and end of the whole program. -A module like this must have global constant functions of its own, just like -curl_global_init(3) and curl_global_cleanup(3). The module thus -has control at the beginning and end of the program and has a place to call -the libcurl functions. If multiple modules in the program use libcurl, they -all separately call the libcurl functions, and that is OK because only the -first curl_global_init(3) and the last curl_global_cleanup(3) in a -program change anything. (libcurl uses a reference count in static memory). +A module like this must have global constant functions of its own, like +curl_global_init(3) and curl_global_cleanup(3). The module thus has control at +the beginning and end of the program and has a place to call the libcurl +functions. If multiple modules in the program use libcurl, they all separately +call the libcurl functions, and that is OK because only the first +curl_global_init(3) and the last curl_global_cleanup(3) in a program change +anything. (libcurl uses a reference count in static memory). 
In a C++ module, it is common to deal with the global constant situation by defining a special class that represents the global constant environment of diff --git a/docs/libcurl/opts/CURLINFO_PRETRANSFER_TIME.md b/docs/libcurl/opts/CURLINFO_PRETRANSFER_TIME.md index 21ca98cf4b..5f6ade455c 100644 --- a/docs/libcurl/opts/CURLINFO_PRETRANSFER_TIME.md +++ b/docs/libcurl/opts/CURLINFO_PRETRANSFER_TIME.md @@ -30,7 +30,7 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_PRETRANSFER_TIME, # DESCRIPTION Pass a pointer to a double to receive the time, in seconds, it took from the -start until the file transfer is just about to begin. +start until the file transfer is about to begin. This time-stamp includes all pre-transfer commands and negotiations that are specific to the particular protocol(s) involved. It includes the sending of diff --git a/docs/libcurl/opts/CURLINFO_PRETRANSFER_TIME_T.md b/docs/libcurl/opts/CURLINFO_PRETRANSFER_TIME_T.md index 55f18eed68..1cc300341e 100644 --- a/docs/libcurl/opts/CURLINFO_PRETRANSFER_TIME_T.md +++ b/docs/libcurl/opts/CURLINFO_PRETRANSFER_TIME_T.md @@ -30,7 +30,7 @@ CURLcode curl_easy_getinfo(CURL *handle, CURLINFO_PRETRANSFER_TIME_T, # DESCRIPTION Pass a pointer to a curl_off_t to receive the time, in microseconds, it took -from the start until the file transfer is just about to begin. +from the start until the file transfer is about to begin. This time-stamp includes all pre-transfer commands and negotiations that are specific to the particular protocol(s) involved. 
It includes the sending of diff --git a/docs/libcurl/opts/CURLINFO_TLS_SSL_PTR.md b/docs/libcurl/opts/CURLINFO_TLS_SSL_PTR.md index 75016a421a..95ef6013ee 100644 --- a/docs/libcurl/opts/CURLINFO_TLS_SSL_PTR.md +++ b/docs/libcurl/opts/CURLINFO_TLS_SSL_PTR.md @@ -54,7 +54,7 @@ The *backend* struct member is one of these defines: CURLSSLBACKEND_NONE (when built without TLS support), CURLSSLBACKEND_WOLFSSL, CURLSSLBACKEND_SECURETRANSPORT, CURLSSLBACKEND_GNUTLS, CURLSSLBACKEND_MBEDTLS, CURLSSLBACKEND_NSS, CURLSSLBACKEND_OPENSSL or CURLSSLBACKEND_SCHANNEL. (Note -that the OpenSSL forks are all reported as just OpenSSL here.) +that the OpenSSL forks are all reported as OpenSSL here.) The *internals* struct member points to a TLS library specific pointer for the active ("in use") SSL connection, with the following underlying types: diff --git a/docs/libcurl/opts/CURLMOPT_PUSHFUNCTION.md b/docs/libcurl/opts/CURLMOPT_PUSHFUNCTION.md index 3bbc517432..4bead2b94f 100644 --- a/docs/libcurl/opts/CURLMOPT_PUSHFUNCTION.md +++ b/docs/libcurl/opts/CURLMOPT_PUSHFUNCTION.md @@ -63,11 +63,11 @@ usual. If the callback returns CURL_PUSH_OK, the new easy handle is added to the multi handle, the callback must not do that by itself. -The callback can access PUSH_PROMISE headers with two accessor -functions. These functions can only be used from within this callback and they -can only access the PUSH_PROMISE headers: curl_pushheader_byname(3) and -curl_pushheader_bynum(3). The normal response headers are passed to the -header callback for pushed streams just as for normal streams. +The callback can access PUSH_PROMISE headers with two accessor functions. +These functions can only be used from within this callback and they can only +access the PUSH_PROMISE headers: curl_pushheader_byname(3) and +curl_pushheader_bynum(3). The normal response headers are passed to the header +callback for pushed streams like for normal streams. 
The header fields can also be accessed with curl_easy_header(3), introduced in later libcurl versions. diff --git a/docs/libcurl/opts/CURLOPT_ACCEPT_ENCODING.md b/docs/libcurl/opts/CURLOPT_ACCEPT_ENCODING.md index cc52177b84..063b48e6d6 100644 --- a/docs/libcurl/opts/CURLOPT_ACCEPT_ENCODING.md +++ b/docs/libcurl/opts/CURLOPT_ACCEPT_ENCODING.md @@ -52,8 +52,8 @@ Set CURLOPT_ACCEPT_ENCODING(3) to NULL to explicitly disable it, which makes libcurl not send an Accept-Encoding: header and not decompress received contents automatically. -You can also opt to just include the Accept-Encoding: header in your request -with CURLOPT_HTTPHEADER(3) but then there is no automatic decompressing when +You can also opt to include the `Accept-Encoding:` header in your request with +CURLOPT_HTTPHEADER(3) but then there is no automatic decompressing when receiving data. Setting this option is a request, not an order; the server may or may not do diff --git a/docs/libcurl/opts/CURLOPT_AWS_SIGV4.md b/docs/libcurl/opts/CURLOPT_AWS_SIGV4.md index cc38425cd7..286e9d5eea 100644 --- a/docs/libcurl/opts/CURLOPT_AWS_SIGV4.md +++ b/docs/libcurl/opts/CURLOPT_AWS_SIGV4.md @@ -62,8 +62,8 @@ Example with "Test:Try", when curl uses the algorithm, it generates for "date", **"test4_request"** for "request type", **"SignedHeaders=content-type;host;x-try-date"** for "signed headers" -If you use just "test", instead of "test:try", test is used for every -generated string. +If you use "test", instead of "test:try", test is used for every generated +string. Setting CURLOPT_HTTPAUTH(3) with the CURLAUTH_AWS_SIGV4 bit set is the same as setting this option with a **"aws:amz"** parameter. diff --git a/docs/libcurl/opts/CURLOPT_BUFFERSIZE.md b/docs/libcurl/opts/CURLOPT_BUFFERSIZE.md index a08cd8be18..e517a30bc4 100644 --- a/docs/libcurl/opts/CURLOPT_BUFFERSIZE.md +++ b/docs/libcurl/opts/CURLOPT_BUFFERSIZE.md @@ -33,7 +33,7 @@ in libcurl. 
The main point of this would be that the write callback gets called more often and with smaller chunks. Secondly, for some protocols, there is a benefit of having a larger buffer for performance. -This is just treated as a request, not an order. You cannot be guaranteed to +This is treated as a request, not an order. You cannot be guaranteed to actually get the given size. This buffer size is by default *CURL_MAX_WRITE_SIZE* (16kB). The maximum @@ -45,10 +45,10 @@ transfer as that may lead to unintended consequences. The maximum size was 512kB until 7.88.0. -Starting in libcurl 8.7.0, there is just a single transfer buffer allocated -per multi handle. This buffer is used by all easy handles added to a multi -handle no matter how many parallel transfers there are. The buffer remains -allocated as long as there are active transfers. +Starting in libcurl 8.7.0, there is a single transfer buffer allocated per +multi handle. This buffer is used by all easy handles added to a multi handle +no matter how many parallel transfers there are. The buffer remains allocated +as long as there are active transfers. # DEFAULT diff --git a/docs/libcurl/opts/CURLOPT_COOKIEFILE.md b/docs/libcurl/opts/CURLOPT_COOKIEFILE.md index 330f3d57d4..36403f9a4d 100644 --- a/docs/libcurl/opts/CURLOPT_COOKIEFILE.md +++ b/docs/libcurl/opts/CURLOPT_COOKIEFILE.md @@ -29,7 +29,7 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_COOKIEFILE, char *filename); Pass a pointer to a null-terminated string as parameter. It should point to the filename of your file holding cookie data to read. The cookie data can be -in either the old Netscape / Mozilla cookie data format or just regular HTTP +in either the old Netscape / Mozilla cookie data format or regular HTTP headers (Set-Cookie style) dumped to a file. It also enables the cookie engine, making libcurl parse and send cookies on @@ -37,7 +37,7 @@ subsequent requests with this handle. 
By passing the empty string ("") to this option, you enable the cookie engine without reading any initial cookies. If you tell libcurl the filename is "-" -(just a single minus sign), libcurl instead reads from stdin. +(a single minus sign), libcurl instead reads from stdin. This option only **reads** cookies. To make libcurl write cookies to file, see CURLOPT_COOKIEJAR(3). diff --git a/docs/libcurl/opts/CURLOPT_COOKIELIST.md b/docs/libcurl/opts/CURLOPT_COOKIELIST.md index 29ee3dfb86..66c2429171 100644 --- a/docs/libcurl/opts/CURLOPT_COOKIELIST.md +++ b/docs/libcurl/opts/CURLOPT_COOKIELIST.md @@ -31,7 +31,7 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_COOKIELIST, Pass a char pointer to a *cookie* string. -Such a cookie can be either a single line in Netscape / Mozilla format or just +Such a cookie can be either a single line in Netscape / Mozilla format or regular HTTP-style header (`Set-Cookie:`) format. This option also enables the cookie engine. This adds that single cookie to the internal cookie store. diff --git a/docs/libcurl/opts/CURLOPT_DOH_SSL_VERIFYPEER.md b/docs/libcurl/opts/CURLOPT_DOH_SSL_VERIFYPEER.md index 8a59d75c67..5c2860d57b 100644 --- a/docs/libcurl/opts/CURLOPT_DOH_SSL_VERIFYPEER.md +++ b/docs/libcurl/opts/CURLOPT_DOH_SSL_VERIFYPEER.md @@ -64,9 +64,9 @@ is done independently of the CURLOPT_DOH_SSL_VERIFYPEER(3) option. **WARNING:** disabling verification of the certificate allows bad guys to man-in-the-middle the communication without you knowing it. Disabling -verification makes the communication insecure. Just having encryption on a -transfer is not enough as you cannot be sure that you are communicating with -the correct end-point. +verification makes the communication insecure. Having encryption on a transfer +is not enough as you cannot be sure that you are communicating with the +correct end-point. 
# DEFAULT diff --git a/docs/libcurl/opts/CURLOPT_ERRORBUFFER.md b/docs/libcurl/opts/CURLOPT_ERRORBUFFER.md index 497a8fab8b..6ba194aabb 100644 --- a/docs/libcurl/opts/CURLOPT_ERRORBUFFER.md +++ b/docs/libcurl/opts/CURLOPT_ERRORBUFFER.md @@ -31,9 +31,9 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_ERRORBUFFER, char *buf); # DESCRIPTION Pass a char pointer to a buffer that libcurl may use to store human readable -error messages on failures or problems. This may be more helpful than just the -return code from curl_easy_perform(3) and related functions. The buffer must -be at least **CURL_ERROR_SIZE** bytes big. +error messages on failures or problems. This may be more helpful than the +single return code from curl_easy_perform(3) and related functions. The buffer +must be at least **CURL_ERROR_SIZE** bytes big. You must keep the associated buffer available until libcurl no longer needs it. Failing to do so might cause odd behavior or even crashes. libcurl might diff --git a/docs/libcurl/opts/CURLOPT_FOLLOWLOCATION.md b/docs/libcurl/opts/CURLOPT_FOLLOWLOCATION.md index f67f85c6f2..307d7a5582 100644 --- a/docs/libcurl/opts/CURLOPT_FOLLOWLOCATION.md +++ b/docs/libcurl/opts/CURLOPT_FOLLOWLOCATION.md @@ -59,9 +59,9 @@ request the same way as the previous one; including the request body if one was provided. For users who think the existing location following is too naive, too simple -or just lacks features, it is easy to instead implement your own redirect -follow logic with the use of curl_easy_getinfo(3)'s CURLINFO_REDIRECT_URL(3) -option instead of using CURLOPT_FOLLOWLOCATION(3). +or lacking features, it is easy to instead implement your own redirect follow +logic with the use of curl_easy_getinfo(3)'s CURLINFO_REDIRECT_URL(3) option +instead of using CURLOPT_FOLLOWLOCATION(3). 
By default, libcurl only sends `Authorization:` or explicitly set `Cookie:` headers to the initial host given in the original URL, to avoid leaking @@ -77,9 +77,9 @@ Pick one of the following modes: ## CURLFOLLOW_ALL (1) -Before 8.13.0 this bit had no name and 1L was just the value to enable this -option. This makes a set custom method be used in all HTTP requests, even -after redirects. +Before 8.13.0 this bit had no name and 1L was the value to enable this option. +This makes a set custom method be used in all HTTP requests, even after +redirects. ## CURLFOLLOW_OBEYCODE (2) diff --git a/docs/libcurl/opts/CURLOPT_FTPPORT.md b/docs/libcurl/opts/CURLOPT_FTPPORT.md index 3570b91033..fe60393633 100644 --- a/docs/libcurl/opts/CURLOPT_FTPPORT.md +++ b/docs/libcurl/opts/CURLOPT_FTPPORT.md @@ -33,9 +33,9 @@ IP address to use for the FTP PORT instruction. The PORT instruction tells the remote server to do a TCP connect to our specified IP address. The string may be a plain IP address, a hostname, a -network interface name (under Unix) or just a '-' symbol to let the library -use your system's default IP address. Default FTP operations are passive, and -does not use the PORT command. +network interface name (under Unix) or a '-' symbol to let the library use +your system's default IP address. Default FTP operations are passive, and do +not use the PORT command. The address can be followed by a ':' to specify a port, optionally followed by a '-' to specify a port range. If the port specified is 0, the operating diff --git a/docs/libcurl/opts/CURLOPT_HEADERFUNCTION.md b/docs/libcurl/opts/CURLOPT_HEADERFUNCTION.md index 060ebf5bdd..d435e27f44 100644 --- a/docs/libcurl/opts/CURLOPT_HEADERFUNCTION.md +++ b/docs/libcurl/opts/CURLOPT_HEADERFUNCTION.md @@ -67,11 +67,11 @@ CURLOPT_WRITEFUNCTION(3), or if it is not specified or NULL - the default, stream-writing function.
It is important to note that the callback is invoked for the headers of all -responses received after initiating a request and not just the final -response. This includes all responses which occur during authentication -negotiation. If you need to operate on only the headers from the final -response, you need to collect headers in the callback yourself and use HTTP -status lines, for example, to delimit response boundaries. +responses received after initiating a request and not only the final response. +This includes all responses which occur during authentication negotiation. If +you need to operate on only the headers from the final response, you need to +collect headers in the callback yourself and use HTTP status lines, for +example, to delimit response boundaries. For an HTTP transfer, the status line and the blank line preceding the response body are both included as headers and passed to this function. @@ -95,7 +95,7 @@ curl_easy_header(3). libcurl does not unfold HTTP "folded headers" (deprecated since RFC 7230). A folded header is a header that continues on a subsequent line and starts with a whitespace. Such folds are passed to the header callback as separate ones, -although strictly they are just continuations of the previous lines. +although strictly they are continuations of the previous lines. # DEFAULT diff --git a/docs/libcurl/opts/CURLOPT_HTTPPROXYTUNNEL.md b/docs/libcurl/opts/CURLOPT_HTTPPROXYTUNNEL.md index 5a3b7390ae..75815c3cb5 100644 --- a/docs/libcurl/opts/CURLOPT_HTTPPROXYTUNNEL.md +++ b/docs/libcurl/opts/CURLOPT_HTTPPROXYTUNNEL.md @@ -33,8 +33,8 @@ difference between using a proxy and to tunnel through it. Tunneling means that an HTTP CONNECT request is sent to the proxy, asking it to connect to a remote host on a specific port number and then the traffic is -just passed through the proxy. Proxies tend to white-list specific port numbers -it allows CONNECT requests to and often only port 80 and 443 are allowed. +passed through the proxy. 
Proxies tend to white-list specific port numbers they +allow CONNECT requests to and often only port 80 and 443 are allowed. To suppress proxy CONNECT response headers from user callbacks use CURLOPT_SUPPRESS_CONNECT_HEADERS(3). diff --git a/docs/libcurl/opts/CURLOPT_HTTP_VERSION.md b/docs/libcurl/opts/CURLOPT_HTTP_VERSION.md index 1bc23bf7af..f865955d48 100644 --- a/docs/libcurl/opts/CURLOPT_HTTP_VERSION.md +++ b/docs/libcurl/opts/CURLOPT_HTTP_VERSION.md @@ -31,9 +31,9 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_HTTP_VERSION, long version); Pass *version* a long, set to one of the values described below. They ask libcurl to use the specific HTTP versions. -Note that the HTTP version is just a request. libcurl still prioritizes to -reuse existing connections so it might then reuse a connection using an HTTP -version you have not asked for. +Note that the HTTP version is a request. libcurl still prioritizes to reuse +existing connections so it might then reuse a connection using an HTTP version +you have not asked for. ## CURL_HTTP_VERSION_NONE diff --git a/docs/libcurl/opts/CURLOPT_NOBODY.md b/docs/libcurl/opts/CURLOPT_NOBODY.md index 8e772a026f..0f4c2cb990 100644 --- a/docs/libcurl/opts/CURLOPT_NOBODY.md +++ b/docs/libcurl/opts/CURLOPT_NOBODY.md @@ -31,8 +31,8 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_NOBODY, long opt); A long parameter set to 1 tells libcurl to not include the body-part in the output when doing what would otherwise be a download. For HTTP(S), this makes -libcurl do a HEAD request. For most other protocols it means just not asking -to transfer the body data. +libcurl do a HEAD request. For most other protocols it means not asking to +transfer the body data. 
For HTTP operations when CURLOPT_NOBODY(3) has been set, disabling this option (with 0) makes it a GET again - only if the method is still set to be diff --git a/docs/libcurl/opts/CURLOPT_PINNEDPUBLICKEY.md b/docs/libcurl/opts/CURLOPT_PINNEDPUBLICKEY.md index c868ed86e6..6eb5d0ee3e 100644 --- a/docs/libcurl/opts/CURLOPT_PINNEDPUBLICKEY.md +++ b/docs/libcurl/opts/CURLOPT_PINNEDPUBLICKEY.md @@ -93,7 +93,7 @@ server's certificate. # Windows-specific: # - Use NUL instead of /dev/null. # - OpenSSL may wait for input instead of disconnecting. Hit enter. -# - If you do not have sed, then just copy the certificate into a file: +# - If you do not have sed, then copy the certificate into a file: # Lines from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----. # openssl s_client -servername www.example.com -connect www.example.com:443 \ diff --git a/docs/libcurl/opts/CURLOPT_PORT.md b/docs/libcurl/opts/CURLOPT_PORT.md index cd18a24f8c..334f01b3d7 100644 --- a/docs/libcurl/opts/CURLOPT_PORT.md +++ b/docs/libcurl/opts/CURLOPT_PORT.md @@ -34,7 +34,7 @@ This option sets *number* to be the remote port number to connect to, instead of the one specified in the URL or the default port for the used protocol. -Usually, you just let the URL decide which port to use but this allows the +Usually, you let the URL decide which port to use but this allows the application to override that. While this option accepts a 'long', a port number is an unsigned 16-bit number diff --git a/docs/libcurl/opts/CURLOPT_PROXY_PINNEDPUBLICKEY.md b/docs/libcurl/opts/CURLOPT_PROXY_PINNEDPUBLICKEY.md index 70c294c5ac..85a404459f 100644 --- a/docs/libcurl/opts/CURLOPT_PROXY_PINNEDPUBLICKEY.md +++ b/docs/libcurl/opts/CURLOPT_PROXY_PINNEDPUBLICKEY.md @@ -91,7 +91,7 @@ from the https proxy server's certificate. # Windows-specific: # - Use NUL instead of /dev/null. # - OpenSSL may wait for input instead of disconnecting. Hit enter. 
-# - If you do not have sed, then just copy the certificate into a file: +# - If you do not have sed, then copy the certificate into a file: # Lines from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----. # openssl s_client -servername www.example.com -connect www.example.com:443 \ diff --git a/docs/libcurl/opts/CURLOPT_PROXY_SSL_VERIFYPEER.md b/docs/libcurl/opts/CURLOPT_PROXY_SSL_VERIFYPEER.md index 159f070462..ef3f83d47e 100644 --- a/docs/libcurl/opts/CURLOPT_PROXY_SSL_VERIFYPEER.md +++ b/docs/libcurl/opts/CURLOPT_PROXY_SSL_VERIFYPEER.md @@ -59,9 +59,9 @@ done independently of the CURLOPT_PROXY_SSL_VERIFYPEER(3) option. **WARNING:** disabling verification of the certificate allows bad guys to man-in-the-middle the communication without you knowing it. Disabling -verification makes the communication insecure. Just having encryption on a -transfer is not enough as you cannot be sure that you are communicating with -the correct end-point. +verification makes the communication insecure. Having encryption on a transfer +is not enough as you cannot be sure that you are communicating with the +correct end-point. # DEFAULT diff --git a/docs/libcurl/opts/CURLOPT_RESOLVE.md b/docs/libcurl/opts/CURLOPT_RESOLVE.md index 952ff824e7..e923c46752 100644 --- a/docs/libcurl/opts/CURLOPT_RESOLVE.md +++ b/docs/libcurl/opts/CURLOPT_RESOLVE.md @@ -57,7 +57,7 @@ use your provided ADDRESS. The optional leading plus (`+`) specifies that the new entry should timeout. Entries added without the leading plus character never times out whereas -entries added with `+HOST:...` times out just like ordinary DNS cache entries. +entries added with `+HOST:...` time out like ordinary DNS cache entries. If the DNS cache already has an entry for the given host+port pair, the new entry overrides the former one. 
diff --git a/docs/libcurl/opts/CURLOPT_RTSP_REQUEST.md b/docs/libcurl/opts/CURLOPT_RTSP_REQUEST.md index e42e75b712..4e5a333c5b 100644 --- a/docs/libcurl/opts/CURLOPT_RTSP_REQUEST.md +++ b/docs/libcurl/opts/CURLOPT_RTSP_REQUEST.md @@ -49,7 +49,7 @@ option is used. (The session ID is not needed for this method) When sent by a client, this method changes the description of the session. For example, if a client is using the server to record a meeting, the client can use Announce to inform the server of all the meta-information about the -session. ANNOUNCE acts like an HTTP PUT or POST just like +session. ANNOUNCE acts like an HTTP PUT or POST, similar to *CURL_RTSPREQ_SET_PARAMETER* ## CURL_RTSPREQ_SETUP @@ -82,7 +82,7 @@ different connections. Retrieve a parameter from the server. By default, libcurl adds a *Content-Type: text/parameters* header on all non-empty requests unless a -custom one is set. GET_PARAMETER acts just like an HTTP PUT or POST (see +custom one is set. GET_PARAMETER acts like an HTTP PUT or POST (see *CURL_RTSPREQ_SET_PARAMETER*). Applications wishing to send a heartbeat message (e.g. in the presence of a server-specified timeout) should send use an empty GET_PARAMETER request. diff --git a/docs/libcurl/opts/CURLOPT_SSH_PUBLIC_KEYFILE.md b/docs/libcurl/opts/CURLOPT_SSH_PUBLIC_KEYFILE.md index d7df84fa1b..161daf4ecf 100644 --- a/docs/libcurl/opts/CURLOPT_SSH_PUBLIC_KEYFILE.md +++ b/docs/libcurl/opts/CURLOPT_SSH_PUBLIC_KEYFILE.md @@ -30,7 +30,7 @@ Pass a char pointer pointing to a *filename* for your public key. If not used, libcurl defaults to **$HOME/.ssh/id_dsa.pub** if the HOME environment variable -is set, and just "id_dsa.pub" in the current directory if HOME is not set. +is set, and "id_dsa.pub" in the current directory if HOME is not set. 
If NULL (or an empty string) is passed to this option, libcurl passes no public key to the SSH library, which then rather derives it from the private diff --git a/docs/libcurl/opts/CURLOPT_SSL_CTX_FUNCTION.md b/docs/libcurl/opts/CURLOPT_SSL_CTX_FUNCTION.md index 827ab0ba1a..0536f3c7b3 100644 --- a/docs/libcurl/opts/CURLOPT_SSL_CTX_FUNCTION.md +++ b/docs/libcurl/opts/CURLOPT_SSL_CTX_FUNCTION.md @@ -40,13 +40,13 @@ CURLcode curl_easy_setopt(CURL *handle, CURLOPT_SSL_CTX_FUNCTION, Pass a pointer to your callback function, which should match the prototype shown above. -This callback function gets called by libcurl just before the initialization -of an SSL connection after having processed all other SSL related options to -give a last chance to an application to modify the behavior of the SSL -initialization. The *ssl_ctx* parameter is a pointer to the SSL library's -*SSL_CTX* for OpenSSL or wolfSSL, a pointer to *mbedtls_ssl_config* for -mbedTLS. If an error is returned from the callback no attempt to establish a -connection is made and the perform operation returns the callback's error +This callback function gets called by libcurl immediately before the +initialization of an SSL connection after having processed all other SSL +related options to give a last chance to an application to modify the behavior +of the SSL initialization. The *ssl_ctx* parameter is a pointer to the SSL +library's *SSL_CTX* for OpenSSL or wolfSSL, a pointer to *mbedtls_ssl_config* +for mbedTLS. If an error is returned from the callback no attempt to establish +a connection is made and the perform operation returns the callback's error code. Set the *clientp* argument passed in to this callback with the CURLOPT_SSL_CTX_DATA(3) option. 
diff --git a/docs/libcurl/opts/CURLOPT_SSL_VERIFYHOST.md b/docs/libcurl/opts/CURLOPT_SSL_VERIFYHOST.md index 4a5d429107..a76127c43d 100644 --- a/docs/libcurl/opts/CURLOPT_SSL_VERIFYHOST.md +++ b/docs/libcurl/opts/CURLOPT_SSL_VERIFYHOST.md @@ -55,9 +55,9 @@ the certificate is signed by a trusted Certificate Authority. **WARNING:** disabling verification of the certificate allows bad guys to man-in-the-middle the communication without you knowing it. Disabling -verification makes the communication insecure. Just having encryption on a -transfer is not enough as you cannot be sure that you are communicating with -the correct end-point. +verification makes the communication insecure. Having encryption on a transfer +is not enough as you cannot be sure that you are communicating with the +correct end-point. When libcurl uses secure protocols it trusts responses and allows for example HSTS and Alt-Svc information to be stored and used subsequently. Disabling diff --git a/docs/libcurl/opts/CURLOPT_SSL_VERIFYPEER.md b/docs/libcurl/opts/CURLOPT_SSL_VERIFYPEER.md index b3c4fcd67a..8aa5275bc8 100644 --- a/docs/libcurl/opts/CURLOPT_SSL_VERIFYPEER.md +++ b/docs/libcurl/opts/CURLOPT_SSL_VERIFYPEER.md @@ -60,9 +60,9 @@ done independently of the CURLOPT_SSL_VERIFYPEER(3) option. **WARNING:** disabling verification of the certificate allows bad guys to man-in-the-middle the communication without you knowing it. Disabling -verification makes the communication insecure. Just having encryption on a -transfer is not enough as you cannot be sure that you are communicating with -the correct end-point. +verification makes the communication insecure. Having encryption on a transfer +is not enough as you cannot be sure that you are communicating with the +correct end-point. When libcurl uses secure protocols it trusts responses and allows for example HSTS and Alt-Svc information to be stored and used subsequently. 
Disabling diff --git a/docs/libcurl/opts/CURLOPT_UPLOAD_BUFFERSIZE.md b/docs/libcurl/opts/CURLOPT_UPLOAD_BUFFERSIZE.md index 746d14c0dd..425fe23deb 100644 --- a/docs/libcurl/opts/CURLOPT_UPLOAD_BUFFERSIZE.md +++ b/docs/libcurl/opts/CURLOPT_UPLOAD_BUFFERSIZE.md @@ -33,7 +33,7 @@ the next layer in the stack to get sent off. In some setups and for some protocols, there is a huge performance benefit of having a larger upload buffer. -This is just treated as a request, not an order. You cannot be guaranteed to +This is treated as a request, not an order. You cannot be guaranteed to actually get the given size. The upload buffer size is by default 64 kilobytes. The maximum buffer size diff --git a/docs/mk-ca-bundle.md b/docs/mk-ca-bundle.md index bb5b7b1a48..8fb80268bc 100644 --- a/docs/mk-ca-bundle.md +++ b/docs/mk-ca-bundle.md @@ -29,8 +29,7 @@ The default *output* name is **ca-bundle.crt**. By setting it to '-' (a single dash) you get the output sent to STDOUT instead of a file. The PEM format this scripts uses for output makes the result readily available -for use by just about all OpenSSL or GnuTLS powered applications, such as curl -and others. +for use by OpenSSL or GnuTLS powered applications, such as curl and others. # OPTIONS diff --git a/docs/tests/CI.md b/docs/tests/CI.md index f5e2b531a6..bcd663c67b 100644 --- a/docs/tests/CI.md +++ b/docs/tests/CI.md @@ -21,8 +21,8 @@ Every pull request is verified for each of the following: If the pull-request fails one of these tests, it shows up as a red X and you are expected to fix the problem. If you do not understand what the issue is or -have other problems to fix the complaint, just ask and other project members -can likely help out. +have other problems to fix the complaint, ask and other project members can +likely help out. Consider the following table while looking at pull request failures: @@ -46,7 +46,7 @@ Windows jobs have a number of flaky issues, most often, these: - test run crashing with fork errors. 
- steps past the test run exiting with -1073741502 (hex C0000142). -In these cases you can just try to update your pull requests to rerun the tests +In these cases you can try to update your pull requests to rerun the tests later as described below. A detailed overview of test runs and results can be found on diff --git a/docs/tests/FILEFORMAT.md b/docs/tests/FILEFORMAT.md index b946c7973c..bd42b1a639 100644 --- a/docs/tests/FILEFORMAT.md +++ b/docs/tests/FILEFORMAT.md @@ -165,7 +165,7 @@ Available substitute variables include: - `%FTP6PORT` - IPv6 port number of the FTP server - `%FTPPORT` - Port number of the FTP server - `%FTPSPORT` - Port number of the FTPS server -- `%FTPTIME2` - Timeout in seconds that should be just sufficient to receive a +- `%FTPTIME2` - Timeout in seconds that should be sufficient to receive a response from the test FTP server - `%GOPHER6PORT` - IPv6 port number of the Gopher server - `%GOPHERPORT` - Port number of the Gopher server @@ -401,7 +401,7 @@ issue. - `auth_required` if this is set and a POST/PUT is made without auth, the server does NOT wait for the full request body to get sent - `delay: [msecs]` - delay this amount after connection -- `idle` - do nothing after receiving the request, just "sit idle" +- `idle` - do nothing after receiving the request, "sit idle" - `stream` - continuously send data to the client, never-ending - `writedelay: [msecs]` delay this amount between reply packets - `skip: [num]` - instructs the server to ignore reading this many bytes from @@ -587,8 +587,7 @@ Set the given environment variables to the specified value before the actual command is run. They are restored back to their former values again after the command has been run. -If the variable name has no assignment, no `=`, then that variable is just -deleted. +If the variable name has no assignment, no `=`, then that variable is deleted. ### `` Command line to run. 
diff --git a/docs/tests/HTTP.md b/docs/tests/HTTP.md index 87ec815425..0470f47ef3 100644 --- a/docs/tests/HTTP.md +++ b/docs/tests/HTTP.md @@ -27,8 +27,8 @@ tests/http/test_01_basic.py ..... Pytest takes arguments. `-v` increases its verbosity and can be used several times. `-k ` can be used to run only matching test cases. The `expr` can -be something resembling a python test or just a string that needs to match -test cases in their names. +be something resembling a python test or a string that needs to match test +cases in their names. ```sh curl/tests/http> pytest -vv -k test_01_02 @@ -138,8 +138,8 @@ left behind. Tests making use of these fixtures have them in their parameter list. This tells pytest that a particular test needs them, so it has to create them. -Since one can invoke pytest for just a single test, it is important that a -test references the ones it needs. +Since one can invoke pytest for a single test, it is important that a test +references the ones it needs. All test cases start with `test_` in their name. We use a double number scheme to group them. This makes it ease to run only specific tests and also give a diff --git a/docs/tests/TEST-SUITE.md b/docs/tests/TEST-SUITE.md index dd0578ede0..37d0ba4e5e 100644 --- a/docs/tests/TEST-SUITE.md +++ b/docs/tests/TEST-SUITE.md @@ -194,8 +194,8 @@ ensure that the memory log file is properly written even if curl crashes. If a test case fails, you can conveniently get the script to invoke the debugger (gdb) for you with the server running and the same command line -parameters that failed. Just invoke `runtests.pl -g` and then -just type 'run' in the debugger to perform the command through the debugger. +parameters that failed. Simply invoke `runtests.pl -g` and then +type 'run' in the debugger to perform the command through the debugger. ### Logs @@ -286,9 +286,9 @@ Each test has a master file that controls all the test data. 
 What to read, what the protocol exchange should look like, what exit code to
 expect and what command line arguments to use etc.
 
-These files are `tests/data/test[num]` where `[num]` is just a unique
-identifier described above, and the XML-like file format of them is
-described in the separate [`FILEFORMAT`](FILEFORMAT.md) document.
+These files are `tests/data/test[num]` where `[num]` is a unique identifier
+described above, and the XML-like file format of them is described in the
+separate [`FILEFORMAT`](FILEFORMAT.md) document.
 
 ### curl tests
diff --git a/docs/wcurl.md b/docs/wcurl.md
index 6b214e7e39..4d311bd689 100644
--- a/docs/wcurl.md
+++ b/docs/wcurl.md
@@ -35,8 +35,8 @@ Simply call **wcurl** with a list of URLs you want to download and **wcurl**
 picks sane defaults.
 
 If you need anything more complex, you can provide any of curl's supported
-parameters via the **--curl-options** option. Just beware that you likely
-should be using curl directly if your use case is not covered.
+parameters via the **--curl-options** option. Beware that you likely should be
+using curl directly if your use case is not covered.
 
 By default, **wcurl** does:
@@ -92,7 +92,7 @@ URL was done by **wcurl**, e.g.: The URL contained whitespace.
 
 ## --dry-run
 
-Do not actually execute curl, just print what would be invoked.
+Do not actually execute curl, only print what would be invoked.
## -V, \--version diff --git a/include/curl/curl.h b/include/curl/curl.h index 632333d799..827fb1badf 100644 --- a/include/curl/curl.h +++ b/include/curl/curl.h @@ -820,7 +820,7 @@ typedef enum { * CURLAUTH_NTLM_WB - HTTP NTLM authentication delegated to winbind helper * CURLAUTH_BEARER - HTTP Bearer token authentication * CURLAUTH_ONLY - Use together with a single other type to force no - * authentication or just that single type + * authentication or that single type * CURLAUTH_ANY - All fine types set * CURLAUTH_ANYSAFE - All fine types except Basic */ @@ -2124,7 +2124,7 @@ typedef enum { /* Specify URL using CURL URL API. */ CURLOPT(CURLOPT_CURLU, CURLOPTTYPE_OBJECTPOINT, 282), - /* add trailing data just after no more data is available */ + /* add trailing data after no more data is available */ CURLOPT(CURLOPT_TRAILERFUNCTION, CURLOPTTYPE_FUNCTIONPOINT, 283), /* pointer to be passed to HTTP_TRAILER_FUNCTION */ @@ -2356,8 +2356,8 @@ typedef enum { Unless one is set programmatically, the .netrc will be queried. 
*/ enum CURL_NETRC_OPTION { - /* we set a single member here, just to make sure we still provide the enum, - but the values to use are defined above with L suffixes */ + /* we set a single member here, to make sure we still provide the enum, but + the values to use are defined above with L suffixes */ CURL_NETRC_LAST = 3 }; @@ -2386,7 +2386,7 @@ enum CURL_NETRC_OPTION { #define CURL_TLSAUTH_SRP 1L enum CURL_TLSAUTH { - /* we set a single member here, just to make sure we still provide the enum, + /* we set a single member here, to make sure we still provide the enum, but the values to use are defined above with L suffixes */ CURL_TLSAUTH_LAST = 2 }; @@ -2409,7 +2409,7 @@ enum CURL_TLSAUTH { #define CURL_TIMECOND_LASTMOD 3L typedef enum { - /* we set a single member here, just to make sure we still provide + /* we set a single member here, to make sure we still provide the enum typedef, but the values to use are defined above with L suffixes */ CURL_TIMECOND_LAST = 4 @@ -3024,9 +3024,8 @@ typedef enum { /* Different data locks for a single share */ typedef enum { CURL_LOCK_DATA_NONE = 0, - /* CURL_LOCK_DATA_SHARE is used internally to say that - * the locking is just made to change the internal state of the share - * itself. + /* CURL_LOCK_DATA_SHARE is used internally to say that the locking is made + * to change the internal state of the share itself. */ CURL_LOCK_DATA_SHARE, CURL_LOCK_DATA_COOKIE, diff --git a/include/curl/easy.h b/include/curl/easy.h index 5b3cdbd64e..0be6915d92 100644 --- a/include/curl/easy.h +++ b/include/curl/easy.h @@ -78,7 +78,7 @@ CURL_EXTERN CURL *curl_easy_duphandle(CURL *curl); * DESCRIPTION * * Re-initializes a curl handle to the default values. This puts back the - * handle to the same state as it was in when it was just created. + * handle to the same state as it was in when it was created. * * It does keep: live connections, the Session ID cache, the DNS cache and the * cookies. 
diff --git a/include/curl/multi.h b/include/curl/multi.h
index 6c098e5a0c..ad6f53f3e2 100644
--- a/include/curl/multi.h
+++ b/include/curl/multi.h
@@ -77,9 +77,8 @@ typedef enum {
   CURLM_LAST
 } CURLMcode;
 
-/* just to make code nicer when using curl_multi_socket() you can now check
-   for CURLM_CALL_MULTI_SOCKET too in the same style it works for
-   curl_multi_perform() and CURLM_CALL_MULTI_PERFORM */
+/* You can check for CURLM_CALL_MULTI_SOCKET too in the same style as it
+   works for curl_multi_perform() and CURLM_CALL_MULTI_PERFORM */
 #define CURLM_CALL_MULTI_SOCKET CURLM_CALL_MULTI_PERFORM
 
 /* bitmask bits for CURLMOPT_PIPELINING */
@@ -201,13 +200,13 @@ CURL_EXTERN CURLMcode curl_multi_wakeup(CURLM *multi_handle);
 /*
  * Name:    curl_multi_perform()
  *
- * Desc:    When the app thinks there is data available for curl it calls this
+ * Desc:    When the app thinks there is data available for curl it calls this
  *          function to read/write whatever there is right now. This returns
  *          as soon as the reads and writes are done. This function does not
  *          require that there actually is data available for reading or that
- *          data can be written, it can be called just in case. It returns
- *          the number of handles that still transfer data in the second
- *          argument's integer-pointer.
+ *          data can be written, it can be called anyway. It returns the
+ *          number of handles that still transfer data in the second
+ *          argument's integer-pointer.
 *
 * Returns: CURLMcode type, general multi error code. *NOTE* that this only
 *          returns errors etc regarding the whole multi stack. There might
@@ -234,7 +233,7 @@ CURL_EXTERN CURLMcode curl_multi_cleanup(CURLM *multi_handle);
 *
 * Desc:    Ask the multi handle if there is any messages/informationals from
 *          the individual transfers. Messages include informationals such as
- *          error code from the transfer or just the fact that a transfer is
+ *          error code from the transfer or the fact that a transfer is
 *          completed. More details on these should be written down as well.
 *
 *          Repeated calls to this function will return a new struct each
@@ -515,7 +514,7 @@ typedef int (*curl_push_callback)(CURL *parent,
 *
 * Desc:    Ask curl for fds for polling. The app can use these to poll on.
 *          We want curl_multi_perform() called as soon as one of them are
- *          ready. Passing zero size allows to get just a number of fds.
+ *          ready. Passing zero size allows getting only the number of fds.
 *
 * Returns: CURLMcode type, general multi error code.
 */
diff --git a/include/curl/typecheck-gcc.h b/include/curl/typecheck-gcc.h
index 3ac3182c50..d9a672ea0b 100644
--- a/include/curl/typecheck-gcc.h
+++ b/include/curl/typecheck-gcc.h
@@ -38,7 +38,7 @@
 * when compiling with -Wlogical-op.
 *
 * To add an option that uses the same type as an existing option, you will
- * just need to extend the appropriate _curl_*_option macro
+ * need to extend the appropriate _curl_*_option macro
 */
 
 #define curl_easy_setopt(handle, option, value) \
@@ -260,7 +260,7 @@ curlcheck_cb_compatible((expr), curl_notify_callback))
 
 /*
- * For now, just make sure that the functions are called with three arguments
+ * Make sure that the functions are called with three arguments
 */
 #define curl_share_setopt(share, opt, param) \
   (curl_share_setopt)(share, opt, param)
diff --git a/lib/CMakeLists.txt b/lib/CMakeLists.txt
index a9307f8011..f2bf47c67d 100644
--- a/lib/CMakeLists.txt
+++ b/lib/CMakeLists.txt
@@ -42,7 +42,7 @@ set_property(DIRECTORY APPEND PROPERTY INCLUDE_DIRECTORIES
 )
 
 if(CURL_BUILD_TESTING)
-  # special libcurlu library just for unittests
+  # special libcurlu library for unittests
   add_library(curlu STATIC EXCLUDE_FROM_ALL ${HHEADERS} ${CSOURCES})
   target_compile_definitions(curlu PUBLIC "CURL_STATICLIB" "UNITTESTS")
   target_link_libraries(curlu PUBLIC ${CURL_LIBS})
diff --git a/lib/altsvc.c b/lib/altsvc.c
index 78d2a5083a..34da3e1ac8 100644
--- a/lib/altsvc.c
+++ b/lib/altsvc.c
@@ -175,7 +175,7 @@ static CURLcode altsvc_add(struct altsvcinfo *asi, const char *line)
                      (size_t)srcport,
                      (size_t)dstport);
    if(as) {
      as->expires = expires;
-      as->prio = 0; /* not supported to just set zero */
+      as->prio = 0; /* not supported, set zero */
      as->persist = persist ? 1 : 0;
      Curl_llist_append(&asi->list, as, &as->node);
    }
diff --git a/lib/arpa_telnet.h b/lib/arpa_telnet.h
index 329f4bcd1c..b5faab419c 100644
--- a/lib/arpa_telnet.h
+++ b/lib/arpa_telnet.h
@@ -28,7 +28,7 @@
 * Telnet option defines. Add more here if in need.
 */
 #define CURL_TELOPT_BINARY   0  /* binary 8-bit data */
-#define CURL_TELOPT_ECHO     1  /* just echo! */
+#define CURL_TELOPT_ECHO     1  /* echo */
 #define CURL_TELOPT_SGA      3  /* Suppress Go Ahead */
 #define CURL_TELOPT_EXOPL  255  /* EXtended OPtions List */
 #define CURL_TELOPT_TTYPE   24  /* Terminal TYPE */
diff --git a/lib/asyn-ares.c b/lib/asyn-ares.c
index 2d710f56da..bbf456d1fe 100644
--- a/lib/asyn-ares.c
+++ b/lib/asyn-ares.c
@@ -416,8 +416,8 @@ CURLcode Curl_async_await(struct Curl_easy *data,
    real_timeout = ares_timeout(ares->channel, &max_timeout, &time_buf);
 
    /* use the timeout period ares returned to us above if less than one
-       second is left, otherwise just use 1000ms to make sure the progress
-       callback gets called frequent enough */
+       second is left, otherwise use 1000ms to make sure the progress callback
+       gets called frequently enough */
    if(!real_timeout->tv_sec)
      call_timeout_ms = (timediff_t)(real_timeout->tv_usec / 1000);
    else
@@ -544,12 +544,11 @@ static void async_ares_hostbyname_cb(void *user_data,
       talking to a pool of DNS servers that can only successfully resolve
       IPv4 address, for example).
 
-       it is also possible that the other request could always just take
-       longer because it needs more time or only the second DNS server can
-       fulfill it successfully. But, to align with the philosophy of Happy
-       Eyeballs, we do not want to wait _too_ long or users will think
-       requests are slow when IPv6 lookups do not actually work (but IPv4
-       ones do).
+ it is also possible that the other request could always take longer + because it needs more time or only the second DNS server can fulfill it + successfully. But, to align with the philosophy of Happy Eyeballs, we + do not want to wait _too_ long or users will think requests are slow + when IPv6 lookups do not actually work (but IPv4 ones do). So, now that we have a usable answer (some IPv4 addresses, some IPv6 addresses, or "no such domain"), we start a timeout for the remaining @@ -564,17 +563,16 @@ static void async_ares_hostbyname_cb(void *user_data, us that, given usable information in hand, we simply do not want to wait "too much longer" after we get a result. - We simply wait an additional amount of time equal to the default - c-ares query timeout. That is enough time for a typical parallel - response to arrive without being "too long". Even on a network - where one of the two types of queries is failing or timing out - constantly, this will usually mean we wait a total of the default - c-ares timeout (5 seconds) plus the round trip time for the successful - request, which seems bearable. The downside is that c-ares might race - with us to issue one more retry just before we give up, but it seems - better to "waste" that request instead of trying to guess the perfect - timeout to prevent it. After all, we do not even know where in the - c-ares retry cycle each request is. + We simply wait an additional amount of time equal to the default c-ares + query timeout. That is enough time for a typical parallel response to + arrive without being "too long". Even on a network where one of the two + types of queries is failing or timing out constantly, this will usually + mean we wait a total of the default c-ares timeout (5 seconds) plus the + round trip time for the successful request, which seems bearable. 
The + downside is that c-ares might race with us to issue one more retry + before we give up, but it seems better to "waste" that request instead + of trying to guess the perfect timeout to prevent it. After all, we do + not even know where in the c-ares retry cycle each request is. */ ares->happy_eyeballs_dns_time = *Curl_pgrs_now(data); Curl_expire(data, HAPPY_EYEBALLS_DNS_TIMEOUT, EXPIRE_HAPPY_EYEBALLS_DNS); @@ -845,7 +843,7 @@ static CURLcode async_ares_set_dns_servers(struct Curl_easy *data, } #ifdef HAVE_CARES_SERVERS_CSV - /* if channel is not there, this is just a parameter check */ + /* if channel is not there, this is a parameter check */ if(ares->channel) #ifdef HAVE_CARES_PORTS_CSV ares_result = ares_set_servers_ports_csv(ares->channel, servers); @@ -888,7 +886,7 @@ CURLcode Curl_async_ares_set_dns_interface(struct Curl_easy *data) if(!interf) interf = ""; - /* if channel is not there, this is just a parameter check */ + /* if channel is not there, this is a parameter check */ if(ares->channel) ares_set_local_dev(ares->channel, interf); @@ -917,7 +915,7 @@ CURLcode Curl_async_ares_set_dns_local_ip4(struct Curl_easy *data) } } - /* if channel is not there yet, this is just a parameter check */ + /* if channel is not there yet, this is a parameter check */ if(ares->channel) ares_set_local_ip4(ares->channel, ntohl(a4.s_addr)); @@ -947,7 +945,7 @@ CURLcode Curl_async_ares_set_dns_local_ip6(struct Curl_easy *data) } } - /* if channel is not there, this is just a parameter check */ + /* if channel is not there, this is a parameter check */ if(ares->channel) ares_set_local_ip6(ares->channel, a6); diff --git a/lib/bufq.h b/lib/bufq.h index e3fbc390d2..638f4c0a64 100644 --- a/lib/bufq.h +++ b/lib/bufq.h @@ -197,9 +197,8 @@ bool Curl_bufq_peek_at(struct bufq *q, size_t offset, const uint8_t **pbuf, size_t *plen); /** - * Tell the buffer queue to discard `amount` buf bytes at the head - * of the queue. 
Skipping more buf than is currently buffered will - * just empty the queue. + * Tell the buffer queue to discard `amount` buf bytes at the head of the + * queue. Skipping more buf than is currently buffered will empty the queue. */ void Curl_bufq_skip(struct bufq *q, size_t amount); diff --git a/lib/cf-h1-proxy.c b/lib/cf-h1-proxy.c index c51ae26d9e..79b50a1c57 100644 --- a/lib/cf-h1-proxy.c +++ b/lib/cf-h1-proxy.c @@ -206,9 +206,8 @@ static CURLcode start_CONNECT(struct Curl_cfilter *cf, int http_minor; CURLcode result; - /* This only happens if we have looped here due to authentication - reasons, and we do not really use the newly cloned URL here - then. Just free it. */ + /* This only happens if we have looped here due to authentication reasons, + and we do not really use the newly cloned URL here then. Free it. */ Curl_safefree(data->req.newurl); result = Curl_http_proxy_create_CONNECT(&req, cf, data, 1); diff --git a/lib/cf-socket.c b/lib/cf-socket.c index 6cec4b4a49..feaf2187cb 100644 --- a/lib/cf-socket.c +++ b/lib/cf-socket.c @@ -580,7 +580,7 @@ static CURLcode bindlocal(struct Curl_easy *data, struct connectdata *conn, * This binds the local socket to a particular interface. This will * force even requests to other local interfaces to go out the external * interface. Only bind to the interface when specified as interface, - * not just as a hostname or ip address. + * not as a hostname or ip address. * * The interface might be a VRF, eg: vrf-blue, which means it cannot be * converted to an IP address and would fail Curl_if2ip. Simply try to @@ -798,7 +798,7 @@ static bool verifyconnect(curl_socket_t sockfd, int *error) * * "I do not have Rational Quantify, but the hint from his post was * ntdll::NtRemoveIoCompletion(). I would assume the SleepEx (or maybe - * just Sleep(0) would be enough?) would release whatever + * Sleep(0) would be enough?) would release whatever * mutex/critical-section the ntdll call is waiting on. 
* * Someone got to verify this on Win-NT 4.0, 2000." @@ -1445,7 +1445,7 @@ static CURLcode cf_socket_send(struct Curl_cfilter *cf, struct Curl_easy *data, (SOCKEINPROGRESS == sockerr) #endif ) { - /* this is just a case of EWOULDBLOCK */ + /* EWOULDBLOCK */ result = CURLE_AGAIN; } else { @@ -1510,7 +1510,7 @@ static CURLcode cf_socket_recv(struct Curl_cfilter *cf, struct Curl_easy *data, (EAGAIN == sockerr) || (SOCKEINTR == sockerr) #endif ) { - /* this is just a case of EWOULDBLOCK */ + /* EWOULDBLOCK */ result = CURLE_AGAIN; } else { diff --git a/lib/content_encoding.c b/lib/content_encoding.c index 8878d8ee68..a79e00c617 100644 --- a/lib/content_encoding.c +++ b/lib/content_encoding.c @@ -193,7 +193,7 @@ static CURLcode inflate_stream(struct Curl_easy *data, done = FALSE; break; case Z_BUF_ERROR: - /* No more data to flush: just exit loop. */ + /* No more data to flush: exit loop. */ break; case Z_STREAM_END: result = process_trailer(data, zp); diff --git a/lib/cookie.c b/lib/cookie.c index 819b8b1027..b2e809b21c 100644 --- a/lib/cookie.c +++ b/lib/cookie.c @@ -366,8 +366,7 @@ static bool invalid_octets(const char *ptr, size_t len) /* The maximum length we accept a date string for the 'expire' keyword. The standard date formats are within the 30 bytes range. This adds an extra - margin just to make sure it realistically works with what is used out - there. + margin to make sure it realistically works with what is used out there. */ #define MAX_DATE_LENGTH 80 @@ -1314,8 +1313,8 @@ CURLcode Curl_cookie_getlist(struct Curl_easy *data, if(matches) { /* * Now we need to make sure that if there is a name appearing more than - * once, the longest specified path version comes first. To make this - * the swiftest way, we just sort them all based on path length. + * once, the longest specified path version comes first. To make this the + * swiftest way, we sort them all based on path length. 
     */
    struct Cookie **array;
    size_t i;
diff --git a/lib/cshutdn.c b/lib/cshutdn.c
index 518a099069..308f2aaa82 100644
--- a/lib/cshutdn.c
+++ b/lib/cshutdn.c
@@ -331,7 +331,7 @@ void Curl_cshutdn_destroy(struct cshutdn *cshutdn,
 {
  if(cshutdn->initialised && data) {
    int timeout_ms = 0;
-    /* Just for testing, run graceful shutdown */
+    /* for testing, run graceful shutdown */
 #ifdef DEBUGBUILD
    {
      const char *p = getenv("CURL_GRACEFUL_SHUTDOWN");
diff --git a/lib/curl_range.c b/lib/curl_range.c
index 070dee4289..9bbafa40cf 100644
--- a/lib/curl_range.c
+++ b/lib/curl_range.c
@@ -54,7 +54,7 @@ CURLcode Curl_range(struct Curl_easy *data)
  else if(!first_num) {
    /* -Y */
    if(!to)
-      /* "-0" is just wrong */
+      /* "-0" is wrong */
      return CURLE_RANGE_ERROR;
 
    data->req.maxdownload = to;
diff --git a/lib/curl_setup.h b/lib/curl_setup.h
index 6e65697408..4f8d65af8b 100644
--- a/lib/curl_setup.h
+++ b/lib/curl_setup.h
@@ -708,7 +708,7 @@
 #elif defined(USE_ARES)
 #  define CURLRES_ASYNCH
 #  define CURLRES_ARES
-/* now undef the stock libc functions just to avoid them being used */
+/* now undef the stock libc functions to avoid them being used */
 #  undef HAVE_GETADDRINFO
 #  undef HAVE_FREEADDRINFO
 #else
diff --git a/lib/curl_sha512_256.c b/lib/curl_sha512_256.c
index 00f4d1071f..648e3e38e9 100644
--- a/lib/curl_sha512_256.c
+++ b/lib/curl_sha512_256.c
@@ -53,7 +53,7 @@
 * The bug was fixed in NetBSD 9.4 release, NetBSD 10.0 release,
 * NetBSD 10.99.11 development.
 * It is safe to apply the workaround even if the bug is not present, as
- * the workaround just reduces performance slightly. */
+ * the workaround only reduces performance slightly.
*/ # include # if __NetBSD_Version__ < 904000000 || \ (__NetBSD_Version__ >= 999000000 && \ diff --git a/lib/curlx/wait.c b/lib/curlx/wait.c index eaf454c122..52784269ef 100644 --- a/lib/curlx/wait.c +++ b/lib/curlx/wait.c @@ -43,13 +43,13 @@ /* * Internal function used for waiting a specific amount of ms in * Curl_socket_check() and Curl_poll() when no file descriptor is provided to - * wait on, just being used to delay execution. Winsock select() and poll() - * timeout mechanisms need a valid socket descriptor in a not null file - * descriptor set to work. Waiting indefinitely with this function is not - * allowed, a zero or negative timeout value will return immediately. Timeout - * resolution, accuracy, as well as maximum supported value is system - * dependent, neither factor is a critical issue for the intended use of this - * function in the library. + * wait on, being used to delay execution. Winsock select() and poll() timeout + * mechanisms need a valid socket descriptor in a not null file descriptor set + * to work. Waiting indefinitely with this function is not allowed, a zero or + * negative timeout value will return immediately. Timeout resolution, + * accuracy, as well as maximum supported value is system dependent, neither + * factor is a critical issue for the intended use of this function in the + * library. * * Return values: * -1 = system call error, or invalid timeout value diff --git a/lib/cw-out.c b/lib/cw-out.c index ab38109ad9..554c32e84e 100644 --- a/lib/cw-out.c +++ b/lib/cw-out.c @@ -195,8 +195,8 @@ static CURLcode cw_out_cb_write(struct cw_out_ctx *ctx, if(nwritten == CURL_WRITEFUNC_PAUSE) { if(data->conn->scheme->flags & PROTOPT_NONETWORK) { /* Protocols that work without network cannot be paused. This is - actually only FILE:// just now, and it cannot pause since the - transfer is not done using the "normal" procedure. */ + actually only FILE:// now, and it cannot pause since the transfer is + not done using the "normal" procedure. 
*/ failf(data, "Write callback asked for PAUSE when not supported"); return CURLE_WRITE_ERROR; } diff --git a/lib/doh.c b/lib/doh.c index 7fe4a06058..4488d422b1 100644 --- a/lib/doh.c +++ b/lib/doh.c @@ -689,10 +689,10 @@ static DOHcode doh_rdata(const unsigned char *doh, return rc; break; case CURL_DNS_TYPE_DNAME: - /* explicit for clarity; just skip; rely on synthesized CNAME */ + /* explicit for clarity; skip; rely on synthesized CNAME */ break; default: - /* unsupported type, just skip it */ + /* unsupported type, skip it */ break; } return DOH_OK; @@ -1048,9 +1048,9 @@ UNITTEST void de_cleanup(struct dohentry *d) * The encoding here is defined in * https://datatracker.ietf.org/doc/html/rfc1035#section-3.1 * - * The input buffer pointer will be modified so it points to - * just after the end of the DNS name encoding on output. (And - * that is why it is an "unsigned char **" :-) + * The input buffer pointer will be modified so it points to after the end of + * the DNS name encoding on output. (And that is why it is an "unsigned char + * **" :-) */ static CURLcode doh_decode_rdata_name(const unsigned char **buf, size_t *remaining, char **dnsname) diff --git a/lib/dynhds.h b/lib/dynhds.h index ee728d1c3e..e30eb45725 100644 --- a/lib/dynhds.h +++ b/lib/dynhds.h @@ -153,16 +153,14 @@ CURLcode Curl_dynhds_cadd(struct dynhds *dynhds, const char *name, const char *value); /** - * Add a single header from an HTTP/1.1 formatted line at the end. Line - * may contain a delimiting CRLF or just LF. Any characters after - * that will be ignored. + * Add a single header from an HTTP/1.1 formatted line at the end. Line may + * contain a delimiting CRLF or LF. Any characters after that will be ignored. */ CURLcode Curl_dynhds_h1_cadd_line(struct dynhds *dynhds, const char *line); /** - * Add a single header from an HTTP/1.1 formatted line at the end. Line - * may contain a delimiting CRLF or just LF. Any characters after - * that will be ignored. 
+ * Add a single header from an HTTP/1.1 formatted line at the end. Line may + * contain a delimiting CRLF or LF. Any characters after that will be ignored. */ CURLcode Curl_dynhds_h1_add_line(struct dynhds *dynhds, const char *line, size_t line_len); diff --git a/lib/easy.c b/lib/easy.c index b30ce2ace4..d05674e819 100644 --- a/lib/easy.c +++ b/lib/easy.c @@ -724,7 +724,7 @@ static CURLcode easy_transfer(struct Curl_multi *multi) * easy handle, destroys the multi handle and returns the easy handle's return * code. * - * REALITY: it cannot just create and destroy the multi handle that easily. It + * REALITY: it cannot create and destroy the multi handle that easily. It * needs to keep it around since if this easy handle is used again by this * function, the same multi handle must be reused so that the same pools and * caches can be used. diff --git a/lib/formdata.c b/lib/formdata.c index b5bd1206f1..3770d28b18 100644 --- a/lib/formdata.c +++ b/lib/formdata.c @@ -168,10 +168,9 @@ static void free_formlist(struct FormInfo *ptr) * * Stores a formpost parameter and builds the appropriate linked list. * - * Has two principal functionalities: using files and byte arrays as - * post parts. Byte arrays are either copied or just the pointer is stored - * (as the user requests) while for files only the filename and not the - * content is stored. + * Has two principal functionalities: using files and byte arrays as post + * parts. Byte arrays are either copied or the pointer is stored (as the user + * requests) while for files only the filename and not the content is stored. 
* * While you may have only one byte array for each name, multiple filenames * are allowed (and because of this feature CURLFORM_END is needed after @@ -667,7 +666,7 @@ void curl_formfree(struct curl_httppost *form) struct curl_httppost *next; if(!form) - /* no form to free, just get out of this */ + /* no form to free, get out of this */ return; do { @@ -710,8 +709,8 @@ static CURLcode setname(curl_mimepart *part, const char *name, size_t len) * mime part at '*finalform'. * * This function will not do a failf() for the potential memory failures but - * should for all other errors it spots. Just note that this function MAY get - * a NULL pointer in the 'data' argument. + * should for all other errors it spots. Note that this function MAY get a + * NULL pointer in the 'data' argument. */ CURLcode Curl_getformdata(CURL *data, diff --git a/lib/ftp.c b/lib/ftp.c index 4c98d86f23..b50d9976c3 100644 --- a/lib/ftp.c +++ b/lib/ftp.c @@ -361,7 +361,7 @@ static void close_secondarysocket(struct Curl_easy *data, /* * Lineend Conversions * On ASCII transfers, e.g. directory listings, we might get lines - * ending in '\r\n' and we prefer just '\n'. + * ending in '\r\n' and we prefer '\n'. * We might also get a lonely '\r' which we convert into a '\n'. */ struct ftp_cw_lc_ctx { @@ -399,8 +399,8 @@ static CURLcode ftp_cw_lc_write(struct Curl_easy *data, if(result) return result; } - /* either we just wrote the newline or it is part of the next - * chunk of bytes we write. */ + /* either we wrote the newline or it is part of the next chunk of bytes + * we write. */ ctx->newline_pending = FALSE; } @@ -633,8 +633,8 @@ static CURLcode getftpresponse(struct Curl_easy *data, int *ftpcodep) /* return the ftp-code */ { /* - * We cannot read just one byte per read() and then go back to select() as - * the OpenSSL read() does not grok that properly. + * We cannot read one byte per read() and then go back to select() as the + * OpenSSL read() does not grok that properly. 
* * Alas, read as much as possible, split up into lines, use the ending line in a response or continue reading. */ @@ -676,10 +676,10 @@ static CURLcode getftpresponse(struct Curl_easy *data, * * A caution here is that the ftp_readresp() function has a cache that may * contain pieces of a response from the previous invoke and we need to - * make sure we do not just wait for input while there is unhandled data in + * make sure we do not wait for input while there is unhandled data in * that cache. But also, if the cache is there, we call ftp_readresp() and - * the cache was not good enough to continue we must not just busy-loop - * around this function. + * if the cache was not good enough to continue, we must not busy-loop + * around this function. * */ @@ -702,7 +702,7 @@ static CURLcode getftpresponse(struct Curl_easy *data, } else if(ev == 0) { result = Curl_pgrsUpdate(data); - continue; /* just continue in our loop for the timeout duration */ + continue; /* continue in our loop for the timeout duration */ } } @@ -782,8 +782,8 @@ static CURLcode ftp_domore_pollset(struct Curl_easy *data, return CURLE_OK; /* When in DO_MORE state, we could be either waiting for us to connect to a - * remote site, or we could wait for that site to connect to us. Or just - * handle ordinary commands. + * remote site, or we could wait for that site to connect to us. Or handle + * ordinary commands. */ CURL_TRC_FTP(data, "[%s] ftp_domore_pollset()", FTP_CSTATE(ftpc)); @@ -1552,7 +1552,7 @@ static CURLcode ftp_state_list(struct Curl_easy *data, Whether the server will support this, is uncertain. The other ftp_filemethods will CWD into dir/dir/ first and - then just do LIST (in that case: nothing to do here) + then do LIST (in that case: nothing to do here) */ const char *lstArg = NULL; int lstArglen = 0; @@ -1688,9 +1688,9 @@ static CURLcode ftp_state_ul_setup(struct Curl_easy *data, which may not exist in the server! The SIZE command is not in RFC959. */ - /* 2. This used to set REST. 
But since we can do append, we - do not another ftp command. We just skip the source file - offset and then we APPEND the rest on the file instead */ + /* 2. This used to set REST. But since we can do append, we do not + issue another ftp command. Skip the source file offset and APPEND the + rest on the file instead */ /* 3. pass file-size number of bytes in the source file */ /* 4. lower the infilesize counter */ @@ -1791,10 +1791,10 @@ static CURLcode ftp_state_retr(struct Curl_easy *data, this even when not doing resumes. */ if(filesize == -1) { infof(data, "ftp server does not support SIZE"); - /* We could not get the size and therefore we cannot know if there really - is a part of the file left to get, although the server will just - close the connection when we start the connection so it will not cause - us any harm, just not make us exit as nicely. */ + /* We could not get the size and therefore we cannot know if there + really is a part of the file left to get, although the server will + close the connection when we start the connection so it will not + cause us any harm, only make us exit less nicely. */ } else { /* We got a file size report, so we check that there actually is a @@ -2392,7 +2392,7 @@ static CURLcode ftp_do_more(struct Curl_easy *data, int *completep) if(result) return result; } - /* otherwise just fall through */ + /* otherwise fall through */ } else { if(data->set.prequote && !ftpc->file) { @@ -2971,7 +2971,7 @@ static CURLcode ftp_state_user_resp(struct Curl_easy *data, { CURLcode result = CURLE_OK; - /* some need password anyway, and others just return 2xx ignored */ + /* some need password anyway, and others return 2xx ignored */ if((ftpcode == 331) && (ftpc->state == FTP_USER)) { /* 331 Password required for 
(the server requires to send the user's password too) */ @@ -3742,9 +3742,9 @@ static CURLcode ftp_done(struct Curl_easy *data, CURLcode status, if(!result && (ftp->transfer == PPTRANSFER_BODY) && ftpc->ctl_valid && pp->pending_resp && !premature) { /* - * Let's see what the server says about the transfer we just performed, - * but lower the timeout as sometimes this connection has died while the - * data has been transferred. This happens when doing through NATs etc that + * Let's see what the server says about the transfer we performed, but + * lower the timeout as sometimes this connection has died while the data + * has been transferred. This happens when going through NATs etc that * abandon old silent connections. */ pp->response = *Curl_pgrs_now(data); /* timeout relative now */ @@ -3760,7 +3760,7 @@ static CURLcode ftp_done(struct Curl_easy *data, CURLcode status, return result; if(ftpc->dont_check && data->req.maxdownload > 0) { - /* we have just sent ABOR and there is no reliable way to check if it was + /* we have sent ABOR and there is no reliable way to check if it was * successful or not; we have to close the connection now */ infof(data, "partial download completed, closing connection"); connclose(conn, "Partial download with no ability to check"); @@ -4303,7 +4303,7 @@ static CURLcode ftp_disconnect(struct Curl_easy *data, disconnect wait in vain and cause more problems than we need to. ftp_quit() will check the state of ftp->ctl_valid. If it is ok it - will try to send the QUIT command, otherwise it will just return. + will try to send the QUIT command, otherwise it will return. 
*/ ftpc->shutdown = TRUE; if(dead_connection || Curl_pp_needs_flush(data, &ftpc->pp)) diff --git a/lib/ftp.h b/lib/ftp.h index ef1aacb40c..20b360033e 100644 --- a/lib/ftp.h +++ b/lib/ftp.h @@ -107,8 +107,8 @@ struct FTP { char *path; /* points to the urlpieces struct field */ char *pathalloc; /* if non-NULL a pointer to an allocated path */ - /* transfer a file/body or not, done as a typedefed enum just to make - debuggers display the full symbol and not just the numerical value */ + /* transfer a file/body or not, done as a typedefed enum to make debuggers + display the full symbol and not the numerical value */ curl_pp_transfer transfer; curl_off_t downloadsize; }; @@ -151,7 +151,7 @@ struct ftp_conn { BIT(ftp_trying_alternative); BIT(dont_check); /* Set to TRUE to prevent the final (post-transfer) file size and 226/250 status check. It should still - read the line, just ignore the result. */ + read the line, ignore the result. */ BIT(ctl_valid); /* Tells Curl_ftp_quit() whether or not to do anything. 
If the connection has timed out or been closed, this should be FALSE when it gets to Curl_ftp_quit() */ diff --git a/lib/hostip.c b/lib/hostip.c index 597c77173c..542b655b11 100644 --- a/lib/hostip.c +++ b/lib/hostip.c @@ -1366,7 +1366,7 @@ CURLcode Curl_loadhostpairs(struct Curl_easy *data) if(curlx_str_until(&host, &target, 4096, ',')) { if(curlx_str_single(&host, ',')) goto err; - /* survive nothing but just a comma */ + /* survive nothing but a comma */ continue; } } diff --git a/lib/hsts.c b/lib/hsts.c index 8413ea7fed..03f51f8109 100644 --- a/lib/hsts.c +++ b/lib/hsts.c @@ -206,7 +206,7 @@ CURLcode Curl_hsts_parse(struct hsts *h, const char *hostname, /* check if it already exists */ sts = Curl_hsts(h, hostname, hlen, FALSE); if(sts) { - /* just update these fields */ + /* update these fields */ sts->expires = expires; sts->includeSubDomains = subdomains; } @@ -456,7 +456,7 @@ static CURLcode hsts_pull(struct Curl_easy *data, struct hsts *h) e.namelen = sizeof(buffer) - 1; e.includeSubDomains = FALSE; /* default */ e.expire[0] = 0; - e.name[0] = 0; /* just to make it clean */ + e.name[0] = 0; /* to make it clean */ sc = data->set.hsts_read(data, &e, data->set.hsts_read_userp); if(sc == CURLSTS_OK) { time_t expires = 0; diff --git a/lib/http.c b/lib/http.c index e9ef131f8e..9de27ba629 100644 --- a/lib/http.c +++ b/lib/http.c @@ -394,7 +394,7 @@ static CURLcode http_perhapsrewind(struct Curl_easy *data, VERBOSE(const char *ongoing_auth = NULL); /* We need a rewind before uploading client read data again. The - * checks below just influence of the upload is to be continued + * checks below influence whether the upload is to be continued * or aborted early. * This depends on how much remains to be sent and in what state * the authentication is. 
Some auth schemes such as NTLM do not work @@ -1195,7 +1195,7 @@ CURLcode Curl_http_follow(struct Curl_easy *data, const char *newurl, } /* the URL could not be parsed for some reason, but since this is FAKE - mode, just duplicate the field as-is */ + mode, duplicate the field as-is */ follow_url = curlx_strdup(newurl); if(!follow_url) return CURLE_OUT_OF_MEMORY; @@ -2418,7 +2418,7 @@ static CURLcode addexpect(struct Curl_easy *data, struct dynbuf *r, return CURLE_OK; /* For really small puts we do not use Expect: headers at all, and for - the somewhat bigger ones we allow the app to disable it. Just make + the somewhat bigger ones we allow the app to disable it. Make sure that the expect100header is always set to the preferred value here. */ ptr = Curl_checkheaders(data, STRCONST("Expect")); @@ -2641,8 +2641,8 @@ static CURLcode http_range(struct Curl_easy *data, data->state.range, total_len - 1, total_len); } else { - /* Range was selected and then we just pass the incoming range and - append total size */ + /* Range was selected and then we pass the incoming range and append + total size */ data->state.aptr.rangeline = curl_maprintf("Content-Range: bytes %s/%" FMT_OFF_T "\r\n", data->state.range, req_clen); @@ -3278,7 +3278,7 @@ static CURLcode http_header_c(struct Curl_easy *data, return CURLE_OK; } } - /* negative, different value or just rubbish - bad HTTP */ + /* negative, different value or rubbish - bad HTTP */ failf(data, "Invalid Content-Length: value"); return CURLE_WEIRD_SERVER_REPLY; } @@ -3757,8 +3757,8 @@ static CURLcode http_statusline(struct Curl_easy *data, */ if(data->state.resume_from && data->state.httpreq == HTTPREQ_GET && k->httpcode == 416) { - /* "Requested Range Not Satisfiable", just proceed and - pretend this is no error */ + /* "Requested Range Not Satisfiable", proceed and pretend this is no + error */ k->ignorebody = TRUE; /* Avoid appending error msg to good data. 
*/ } diff --git a/lib/http.h b/lib/http.h index 7076cd9c60..bb535955d1 100644 --- a/lib/http.h +++ b/lib/http.h @@ -39,8 +39,8 @@ typedef enum { /* When redirecting transfers. */ typedef enum { - FOLLOW_NONE, /* not used within the function, just a placeholder to - allow initing to this */ + FOLLOW_NONE, /* not used within the function, a placeholder to allow + initing to this */ FOLLOW_FAKE, /* only records stuff, not actually following */ FOLLOW_RETRY, /* set if this is a request retry as opposed to a real redirect following */ diff --git a/lib/http2.c b/lib/http2.c index 27f60b7656..0a511656d0 100644 --- a/lib/http2.c +++ b/lib/http2.c @@ -63,7 +63,7 @@ #define H2_CONN_WINDOW_SIZE (10 * 1024 * 1024) /* on receiving from TLS, we prep for holding a full stream window */ #define H2_NW_RECV_CHUNKS (H2_CONN_WINDOW_SIZE / H2_CHUNK_SIZE) -/* on send into TLS, we just want to accumulate small frames */ +/* on send into TLS, we want to accumulate small frames */ #define H2_NW_SEND_CHUNKS 1 /* this is how much we want "in flight" for a stream, unthrottled */ #define H2_STREAM_WINDOW_SIZE_MAX (10 * 1024 * 1024) @@ -525,8 +525,8 @@ static CURLcode h2_process_pending_input(struct Curl_cfilter *cf, } /* - * The server may send us data at any point (e.g. PING frames). Therefore, - * we cannot assume that an HTTP/2 socket is dead just because it is readable. + * The server may send us data at any point (e.g. PING frames). Therefore, we + * cannot assume that an HTTP/2 socket is dead because it is readable. * * Check the lower filters first and, if successful, peek at the socket * and distinguish between closed and data. @@ -677,7 +677,7 @@ char *curl_pushheader_byname(struct curl_pushheaders *h, const char *name) size_t i; /* Verify that we got a good easy handle in the push header struct, mostly to detect rubbish input fast(er). Also empty header name - is just a rubbish too. We have to allow ":" at the beginning of + is rubbish too. 
We have to allow ":" at the beginning of the header, but header == ":" must be rejected. If we have ':' in the middle of header, it could be matched in middle of the value, this is because we do prefix match.*/ diff --git a/lib/imap.c b/lib/imap.c index 8aa0afb8d2..0546595f89 100644 --- a/lib/imap.c +++ b/lib/imap.c @@ -1322,12 +1322,12 @@ static CURLcode imap_state_listsearch_resp(struct Curl_easy *data, imap_state(data, imapc, IMAP_STOP); } else { - /* Failed to parse literal, just write the line */ + /* Failed to parse literal, write the line */ result = Curl_client_write(data, CLIENTWRITE_BODY, line, len); } } else { - /* No literal, just write the line as-is */ + /* No literal, write the line as-is */ result = Curl_client_write(data, CLIENTWRITE_BODY, line, len); } } @@ -1455,7 +1455,7 @@ static CURLcode imap_state_fetch_resp(struct Curl_easy *data, infof(data, "Written %zu bytes, %" FMT_OFF_TU " bytes are left for transfer", chunk, size - chunk); - /* Have we used the entire overflow or just part of it?*/ + /* Have we used the entire overflow or part of it?*/ if(pp->overflow > chunk) { /* remember the remaining trailing overflow data */ pp->overflow -= chunk; diff --git a/lib/md4.c b/lib/md4.c index bf1dc0d25f..cf18026484 100644 --- a/lib/md4.c +++ b/lib/md4.c @@ -210,9 +210,8 @@ typedef struct md4_ctx MD4_CTX; * SET reads 4 input bytes in little-endian byte order and stores them * in a properly aligned word in host byte order. * - * The check for little-endian architectures that tolerate unaligned - * memory accesses is just an optimization. Nothing will break if it - * does not work. + * The check for little-endian architectures that tolerate unaligned memory + * accesses is an optimization. Nothing will break if it does not work. 
*/ #if defined(__i386__) || defined(__x86_64__) || defined(__vax__) #define MD4_SET(n) (*(const uint32_t *)(const void *)&ptr[(n) * 4]) diff --git a/lib/md5.c b/lib/md5.c index e76863dc22..72b59c97d2 100644 --- a/lib/md5.c +++ b/lib/md5.c @@ -270,7 +270,7 @@ typedef struct md5_ctx my_md5_ctx; * The basic MD5 functions. * * F and G are optimized compared to their RFC 1321 definitions for - * architectures that lack an AND-NOT instruction, just like in Colin Plumb's + * architectures that lack an AND-NOT instruction, like in Colin Plumb's * implementation. */ #define MD5_F(x, y, z) ((z) ^ ((x) & ((y) ^ (z)))) @@ -291,9 +291,8 @@ typedef struct md5_ctx my_md5_ctx; * SET reads 4 input bytes in little-endian byte order and stores them * in a properly aligned word in host byte order. * - * The check for little-endian architectures that tolerate unaligned - * memory accesses is just an optimization. Nothing will break if it - * does not work. + * The check for little-endian architectures that tolerate unaligned memory + * accesses is an optimization. Nothing will break if it does not work. */ #if defined(__i386__) || defined(__x86_64__) || defined(__vax__) #define MD5_SET(n) (*(const uint32_t *)(const void *)&ptr[(n) * 4]) diff --git a/lib/mime.c b/lib/mime.c index bf4916f51b..37ea514e4f 100644 --- a/lib/mime.c +++ b/lib/mime.c @@ -310,7 +310,7 @@ static curl_off_t encoder_nop_size(curl_mimepart *part) return part->datasize; } -/* 7-bit encoder: the encoder is just a data validity check. */ +/* 7-bit encoder: the encoder is a data validity check. 
*/ static size_t encoder_7bit_read(char *buffer, size_t size, bool ateof, curl_mimepart *part) { diff --git a/lib/mprintf.c b/lib/mprintf.c index cf30e41c88..c6a4a49429 100644 --- a/lib/mprintf.c +++ b/lib/mprintf.c @@ -224,7 +224,7 @@ static int parsefmt(const char *format, /* illegal combo */ return PFMT_DOLLAR; - /* we got no positional, just get the next arg */ + /* we got no positional, get the next arg */ param = -1; use_dollar = DOLLAR_NOPE; } @@ -938,7 +938,7 @@ static bool out_pointer(void *userp, * All output is sent to the 'stream()' callback, one byte at a time. */ -static int formatf(void *userp, /* untouched by format(), just sent to the +static int formatf(void *userp, /* untouched by format(), sent to the stream() function in the second argument */ /* function pointer called for each output character */ int (*stream)(unsigned char, void *), @@ -972,7 +972,7 @@ static int formatf(void *userp, /* untouched by format(), just sent to the done++; } if(optr->flags & FLAGS_SUBSTR) - /* this is just a substring */ + /* this is a substring */ continue; } diff --git a/lib/multi.c b/lib/multi.c index e6a952334b..5a2f7187ff 100644 --- a/lib/multi.c +++ b/lib/multi.c @@ -374,7 +374,7 @@ static CURLMcode multi_xfers_add(struct Curl_multi *multi, if(capacity < max_capacity) { /* We want `multi->xfers` to have "sufficient" free rows, so that we do - * have to reuse the `mid` from a just removed easy right away. + * not have to reuse the `mid` from a removed easy right away. * Since uint_tbl and uint_bset are quite memory efficient, * regard less than 25% free as insufficient. * (for low capacities, e.g. multi_easy, 4 or less). */ @@ -627,7 +627,7 @@ static void multi_done_locked(struct connectdata *conn, return; } - data->state.done = TRUE; /* called just now! */ + data->state.done = TRUE; /* called now! 
*/ data->state.recent_conn_id = conn->connection_id; Curl_resolv_unlink(data, &data->state.dns[0]); /* done with this */ @@ -1461,7 +1461,7 @@ static CURLMcode multi_wait(struct Curl_multi *multi, #endif int pollrc; #ifdef USE_WINSOCK - if(cpfds.n) /* just pre-check with Winsock */ + if(cpfds.n) /* pre-check with Winsock */ pollrc = Curl_poll(cpfds.pfds, cpfds.n, 0); else pollrc = 0; @@ -2015,7 +2015,7 @@ static CURLMcode state_performing(struct Curl_easy *data, if(data->req.newurl || retry) { followtype follow = FOLLOW_NONE; if(!retry) { - /* if the URL is a follow-location and not just a retried request then + /* if the URL is a follow-location and not a retried request then figure out the URL here */ curlx_free(newurl); newurl = data->req.newurl; @@ -2985,7 +2985,7 @@ void Curl_multi_will_close(struct Curl_easy *data, curl_socket_t s) * add_next_timeout() * * Each Curl_easy has a list of timeouts. The add_next_timeout() is called - * when it has just been removed from the splay tree because the timeout has + * when it has been removed from the splay tree because the timeout has * expired. This function is then to advance in the list to pick the next * timeout to use (skip the already expired ones) and add this node back to * the splay tree again. @@ -3551,7 +3551,7 @@ void Curl_expire_ex(struct Curl_easy *data, set.tv_usec -= 1000000; } - /* Remove any timer with the same id just in case. */ + /* Remove any timer with the same id */ multi_deltimeout(data, id); /* Add it to the timer list. It must stay in the list until it has expired diff --git a/lib/multi_ev.c b/lib/multi_ev.c index a399e3f4cb..696a012ebf 100644 --- a/lib/multi_ev.c +++ b/lib/multi_ev.c @@ -568,9 +568,9 @@ void Curl_multi_ev_dirty_xfers(struct Curl_multi *multi, /* Unmatched socket, we cannot act on it but we ignore this fact. 
In real-world tests it has been proved that libevent can in fact give - the application actions even though the socket was just previously + the application actions even though the socket was previously asked to get removed, so thus we better survive stray socket actions - and just move on. */ + and move on. */ if(entry) { struct Curl_easy *data; uint32_t mid; diff --git a/lib/noproxy.c b/lib/noproxy.c index 1421fb114d..ee03fd35b9 100644 --- a/lib/noproxy.c +++ b/lib/noproxy.c @@ -201,7 +201,7 @@ bool Curl_check_noproxy(const char *name, const char *no_proxy) if(!strcmp("*", no_proxy)) return TRUE; - /* NO_PROXY was specified and it was not just an asterisk */ + /* NO_PROXY was specified and it was not only an asterisk */ /* Check if name is an IP address; if not, assume it being a hostname. */ namelen = strlen(name); @@ -251,7 +251,7 @@ bool Curl_check_noproxy(const char *name, const char *no_proxy) while(*p == ',') p++; } /* while(*p) */ - } /* NO_PROXY was specified and it was not just an asterisk */ + } /* NO_PROXY was specified and it was not only an asterisk */ return FALSE; } diff --git a/lib/parsedate.c b/lib/parsedate.c index db450f10f7..35c7c7af26 100644 --- a/lib/parsedate.c +++ b/lib/parsedate.c @@ -387,7 +387,7 @@ static int parsedate(const char *date, time_t *output) } if(!found && (tzoff == -1)) { - /* this just must be a time zone string */ + /* this must be a time zone string */ tzoff = checktz(date, len); if(tzoff != -1) found = TRUE; diff --git a/lib/pingpong.c b/lib/pingpong.c index 572aec78c0..4ede0a5985 100644 --- a/lib/pingpong.c +++ b/lib/pingpong.c @@ -94,7 +94,7 @@ CURLcode Curl_pp_statemach(struct Curl_easy *data, if(Curl_conn_data_pending(data, FIRSTSOCKET)) rc = 1; else if(pp->overflow) - /* We are receiving and there is data in the cache so just read it */ + /* We are receiving and there is data in the cache so read it */ rc = 1; else if(!pp->sendleft && Curl_conn_data_pending(data, FIRSTSOCKET)) /* We are receiving and there is data 
ready in the SSL library */ diff --git a/lib/ratelimit.c b/lib/ratelimit.c index f27dff9f4f..2d88cdd7c9 100644 --- a/lib/ratelimit.c +++ b/lib/ratelimit.c @@ -105,7 +105,7 @@ static void rlimit_tune_steps(struct Curl_rlimit *r, /* Calculate tokens for the last step and the ones before. */ tokens_last = tokens_total / 100; - if(!tokens_last) /* less than 100 total, just use 1 */ + if(!tokens_last) /* less than 100 total, use 1 */ tokens_last = 1; else if(tokens_last > CURL_RLIMIT_MIN_RATE) tokens_last = CURL_RLIMIT_MIN_RATE; diff --git a/lib/request.h b/lib/request.h index bcdd3168da..5332d48538 100644 --- a/lib/request.h +++ b/lib/request.h @@ -32,7 +32,7 @@ struct UserDefined; enum expect100 { - EXP100_SEND_DATA, /* enough waiting, just send the body now */ + EXP100_SEND_DATA, /* enough waiting, send the body now */ EXP100_AWAITING_CONTINUE, /* waiting for the 100 Continue header */ EXP100_SENDING_REQUEST, /* still sending the request but will wait for the 100 header once done with the request */ diff --git a/lib/rtsp.c b/lib/rtsp.c index aa783ef06d..11848c8282 100644 --- a/lib/rtsp.c +++ b/lib/rtsp.c @@ -926,7 +926,7 @@ static CURLcode rtsp_rtp_write_resp(struct Curl_easy *data, out: if((data->set.rtspreq == RTSPREQ_RECEIVE) && (rtspc->state == RTP_PARSE_SKIP)) { - /* In special mode RECEIVE, we just process one chunk of network + /* In special mode RECEIVE, we process one chunk of network * data, so we stop the transfer here, if we have no incomplete * RTP message pending. 
*/ data->req.download_done = TRUE; diff --git a/lib/select.c b/lib/select.c index a3d77145ce..9a11924976 100644 --- a/lib/select.c +++ b/lib/select.c @@ -66,7 +66,7 @@ static int our_select(curl_socket_t maxfd, /* highest socket number */ if((!fds_read || fds_read->fd_count == 0) && (!fds_write || fds_write->fd_count == 0) && (!fds_err || fds_err->fd_count == 0)) { - /* no sockets, just wait */ + /* no sockets, wait */ return curlx_wait_ms(timeout_ms); } #endif @@ -82,9 +82,9 @@ static int our_select(curl_socket_t maxfd, /* highest socket number */ given as null. At least one must be non-null, and any non-null descriptor set must contain at least one handle to a socket. - It is unclear why Winsock does not just handle this for us instead of + It is unclear why Winsock does not handle this for us instead of calling this an error. Luckily, with Winsock, we can _also_ ask how - many bits are set on an fd_set. So, let's just check it beforehand. + many bits are set on an fd_set. So, let's check it beforehand. */ return select((int)maxfd + 1, fds_read && fds_read->fd_count ? fds_read : NULL, @@ -128,7 +128,7 @@ int Curl_socket_check(curl_socket_t readfd0, /* two sockets to read from */ if((readfd0 == CURL_SOCKET_BAD) && (readfd1 == CURL_SOCKET_BAD) && (writefd == CURL_SOCKET_BAD)) { - /* no sockets, just wait */ + /* no sockets, wait */ return curlx_wait_ms(timeout_ms); } @@ -223,7 +223,7 @@ int Curl_poll(struct pollfd ufds[], unsigned int nfds, timediff_t timeout_ms) } } if(fds_none) { - /* no sockets, just wait */ + /* no sockets, wait */ return curlx_wait_ms(timeout_ms); } diff --git a/lib/sendf.c b/lib/sendf.c index c8b0e28eea..92e77b482a 100644 --- a/lib/sendf.c +++ b/lib/sendf.c @@ -697,7 +697,7 @@ static CURLcode cr_in_read(struct Curl_easy *data, case CURL_READFUNC_PAUSE: if(data->conn->scheme->flags & PROTOPT_NONETWORK) { /* protocols that work without network cannot be paused. 
This is - actually only FILE:// just now, and it cannot pause since the transfer + actually only FILE:// now, and it cannot pause since the transfer is not done using the "normal" procedure. */ failf(data, "Read callback asked for PAUSE when not supported"); result = CURLE_READ_ERROR; diff --git a/lib/setopt.c b/lib/setopt.c index 5d5da804d4..ea69ccd821 100644 --- a/lib/setopt.c +++ b/lib/setopt.c @@ -359,8 +359,8 @@ CURLcode Curl_setopt_SSLVERSION(struct Curl_easy *data, CURLoption option, static CURLcode setopt_RTSP_REQUEST(struct Curl_easy *data, long arg) { /* - * Set the RTSP request method (OPTIONS, SETUP, PLAY, etc...) - * Would this be better if the RTSPREQ_* were just moved into here? + * Set the RTSP request method (OPTIONS, SETUP, PLAY, etc...) Would this be + * better if the RTSPREQ_* were moved into here? */ Curl_RtspReq rtspreq = RTSPREQ_NONE; switch(arg) { @@ -1374,7 +1374,7 @@ static CURLcode setopt_slist(struct Curl_easy *data, CURLoption option, * Entries added this way will remain in the cache until explicitly * removed or the handle is cleaned up. * - * Prefix the HOST with plus sign (+) to have the entry expire just like + * Prefix the HOST with plus sign (+) to have the entry expire like * automatically added entries. * * Prefix the HOST with dash (-) to _remove_ the entry from the cache. 
@@ -2050,10 +2050,9 @@ static CURLcode setopt_cptr(struct Curl_easy *data, CURLoption option, */ return Curl_setstropt(&s->str[STRING_CUSTOMREQUEST], ptr); - /* we do not set - s->method = HTTPREQ_CUSTOM; - here, we continue as if we were using the already set type - and this just changes the actual request keyword */ + /* we do not set s->method = HTTPREQ_CUSTOM; here, we continue as if we + were using the already set type and this changes the actual request + keyword */ case CURLOPT_SERVICE_NAME: /* * Set authentication service name for DIGEST-MD5, Kerberos 5 and SPNEGO diff --git a/lib/setup-vms.h b/lib/setup-vms.h index 331d5edcae..35d12f0b42 100644 --- a/lib/setup-vms.h +++ b/lib/setup-vms.h @@ -86,7 +86,7 @@ static char *vms_translate_path(const char *path) char *test_str; /* See if the result is in VMS format, if not, we are done */ - /* Assume that this is a PATH, not just some data */ + /* Assume that this is a PATH, not some data */ test_str = strpbrk(path, ":[<^"); if(!test_str) { return (char *)path; @@ -165,7 +165,7 @@ static struct passwd *vms_getpwuid(uid_t uid) return my_passwd; } - /* If no changes needed just return it */ + /* If no changes needed, return it */ if(unix_path == my_passwd->pw_dir) { return my_passwd; } diff --git a/lib/setup-win32.h b/lib/setup-win32.h index caf2f8942e..6a89b966de 100644 --- a/lib/setup-win32.h +++ b/lib/setup-win32.h @@ -64,12 +64,12 @@ #endif /* - * Include header files for Windows builds before redefining anything. - * Use this preprocessor block only to include or exclude windows.h, - * winsock2.h or ws2tcpip.h. Any other Windows thing belongs - * to any other further and independent block. Under Cygwin things work - * just as under Linux (e.g. ) and the Winsock headers should - * never be included when __CYGWIN__ is defined. + * Include header files for Windows builds before redefining anything. Use + * this preprocessor block only to include or exclude windows.h, winsock2.h or + * ws2tcpip.h. 
Any other Windows thing belongs to any other further and + * independent block. Under Cygwin things work as under Linux (e.g. + * ) and the Winsock headers should never be included when + * __CYGWIN__ is defined. */ #ifdef _WIN32 # if defined(UNICODE) && !defined(_UNICODE) diff --git a/lib/smtp.c b/lib/smtp.c index 7b0d242afb..eee044e375 100644 --- a/lib/smtp.c +++ b/lib/smtp.c @@ -315,7 +315,7 @@ static CURLcode cr_eob_init(struct Curl_easy *data, struct cr_eob_ctx *ctx = reader->ctx; (void)data; /* The first char we read is the first on a line, as if we had - * read CRLF just before */ + * read CRLF before */ ctx->n_eob = 2; Curl_bufq_init2(&ctx->buf, (16 * 1024), 1, BUFQ_OPT_SOFT_LIMIT); return CURLE_OK; @@ -354,7 +354,7 @@ static CURLcode cr_eob_read(struct Curl_easy *data, ctx->read_eos = eos; if(nread) { if(!ctx->n_eob && !memchr(buf, SMTP_EOB[0], nread)) { - /* not in the middle of a match, no EOB start found, just pass */ + /* not in the middle of a match, no EOB start found, pass */ *pnread = nread; *peos = FALSE; return CURLE_OK; @@ -403,7 +403,7 @@ static CURLcode cr_eob_read(struct Curl_easy *data, CURL_TRC_SMTP(data, "auto-ending mail body with '\\r\\n.\\r\\n'"); switch(ctx->n_eob) { case 2: - /* seen a CRLF at the end, just add the remainder */ + /* seen a CRLF at the end, add the remainder */ eob = &SMTP_EOB[2]; break; case 3: diff --git a/lib/splay.c b/lib/splay.c index 5e4cd54d27..ddab7a4d6e 100644 --- a/lib/splay.c +++ b/lib/splay.c @@ -241,7 +241,7 @@ int Curl_splayremove(struct Curl_tree *t, to remove, as otherwise we might be trying to remove a node that is not actually in the tree. - We cannot just compare the keys here as a double remove in quick + We cannot compare the keys here as a double remove in quick succession of a node with key != SPLAY_SUBNODE && same != NULL could return the same key but a different node. 
*/ DEBUGASSERT(t == removenode); @@ -252,7 +252,7 @@ int Curl_splayremove(struct Curl_tree *t, remove the root node of a list of nodes with identical keys. */ x = t->samen; if(x != t) { - /* 'x' is the new root node, we just make it use the root node's + /* 'x' is the new root node, we make it use the root node's smaller/larger links */ x->key = t->key; diff --git a/lib/strequal.c b/lib/strequal.c index e712691bdc..a352fda70b 100644 --- a/lib/strequal.c +++ b/lib/strequal.c @@ -42,7 +42,7 @@ static int casecompare(const char *first, const char *second) second++; } /* If we are here either the strings are the same or the length is different. - We can just test if the "current" character is non-zero for one and zero + We can test if the "current" character is non-zero for one and zero for the other. Note that the characters may not be exactly the same even if they match, we only want to compare zero-ness. */ return !*first == !*second; diff --git a/lib/system_win32.c b/lib/system_win32.c index c951da9d99..0f665cd620 100644 --- a/lib/system_win32.c +++ b/lib/system_win32.c @@ -32,7 +32,7 @@ CURLcode Curl_win32_init(long flags) { /* CURL_GLOBAL_WIN32 controls the *optional* part of the initialization which - is just for Winsock at the moment. Any required Win32 initialization + is for Winsock at the moment. Any required Win32 initialization should take place after this block. */ if(flags & CURL_GLOBAL_WIN32) { #ifdef USE_WINSOCK diff --git a/lib/telnet.c b/lib/telnet.c index 678fe656a3..5bae99e1fc 100644 --- a/lib/telnet.c +++ b/lib/telnet.c @@ -211,8 +211,8 @@ static CURLcode init_telnet(struct Curl_easy *data) */ tn->him_preferred[CURL_TELOPT_ECHO] = CURL_YES; - /* Set the subnegotiation fields to send information - just after negotiation passed (do/will) + /* Set the subnegotiation fields to send information after negotiation + passed (do/will) Default values are (0,0) initialized by calloc. 
According to the RFC1013 it is valid: @@ -961,7 +961,7 @@ static CURLcode check_telnet_options(struct Curl_easy *data, /* if the option contains an IAC code, it should be escaped in the output, but as we cannot think of any legit way to send that as part of the content we - rather just ban its use instead */ + rather ban its use instead */ static bool bad_option(const char *data) { return !data || !!strchr(data, CURL_IAC); @@ -1293,7 +1293,7 @@ static CURLcode telnet_do(struct Curl_easy *data, bool *done) /* If stdin_handle is a pipe, use PeekNamedPipe() method to check it, else use the old WaitForMultipleObjects() way */ if(GetFileType(stdin_handle) == FILE_TYPE_PIPE || data->set.is_fread_set) { - /* Do not wait for stdin_handle, just wait for event_handle */ + /* Do not wait for stdin_handle, wait for event_handle */ obj_count = 1; /* Check stdin_handle per 100 milliseconds */ wait_timeout = 100; diff --git a/lib/tftp.c b/lib/tftp.c index 1e1bf41577..6bc6f0f473 100644 --- a/lib/tftp.c +++ b/lib/tftp.c @@ -536,7 +536,7 @@ static CURLcode tftp_rx(struct tftp_conn *state, tftp_event_t event) infof(data, "Received last DATA packet block %d again.", rblock); } else { - /* totally unexpected, just log it */ + /* totally unexpected, log it */ infof(data, "Received unexpected DATA packet block %d, expecting block %d", rblock, NEXT_BLOCKNUM(state->block)); diff --git a/lib/transfer.c b/lib/transfer.c index 8153e0622f..d9909c494b 100644 --- a/lib/transfer.c +++ b/lib/transfer.c @@ -254,10 +254,10 @@ static CURLcode sendrecv_dl(struct Curl_easy *data, #if 0 DEBUGF(infof(data, "dl_rlimit, available=%" FMT_OFF_T, dl_avail)); #endif - /* In case of rate limited downloads: if this loop already got - * data and less than 16k is left in the limit, break out. - * We want to stutter a bit to keep in the limit, but too small - * receives will just cost cpu unnecessarily. 
*/ + /* In case of rate limited downloads: if this loop already got data and + * less than 16k is left in the limit, break out. We want to stutter a + * bit to keep in the limit, but too small receives will cost cpu + * unnecessarily. */ if(dl_avail <= 0) { rate_limited = TRUE; break; } @@ -406,7 +406,7 @@ CURLcode Curl_sendrecv(struct Curl_easy *data) } else { /* - * The transfer has been performed. Just make some general checks before + * The transfer has been performed. Make some general checks before * returning. */ if(!(data->req.no_body) && (k->size != -1) && @@ -666,11 +666,10 @@ CURLcode Curl_retry_request(struct Curl_easy *data, char **url) return CURLE_OUT_OF_MEMORY; connclose(conn, "retry"); /* close this connection */ - conn->bits.retry = TRUE; /* mark this as a connection we are about - to retry. Marking it this way should - prevent i.e HTTP transfers to return - error just because nothing has been - transferred! */ + conn->bits.retry = TRUE; /* mark this as a connection we are about to + retry. Marking it this way should prevent e.g. + HTTP transfers from returning an error because + nothing has been transferred! */ Curl_creader_set_rewind(data, TRUE); } return CURLE_OK; @@ -704,9 +703,9 @@ static void xfer_setup( k->shutdown = FALSE; k->shutdown_err_ignore = FALSE; - /* The code sequence below is placed in this function just because all - necessary input is not always known in do_complete() as this function may - be called after that */ + /* The code sequence below is placed in this function because all necessary + input is not always known in do_complete() as this function may be called + after that */ if(!k->header && (recv_size > 0)) Curl_pgrsSetDownloadSize(data, recv_size); diff --git a/lib/url.c b/lib/url.c index 495de2297c..93e361deea 100644 --- a/lib/url.c +++ b/lib/url.c @@ -125,9 +125,9 @@ static void data_priority_cleanup(struct Curl_easy *data); #define data_priority_cleanup(x) #endif -/* Some parts of the code (e.g.
chunked encoding) assume this buffer has at - * more than just a few bytes to play with. Do not let it become too small or - * bad things will happen. +/* Some parts of the code (e.g. chunked encoding) assume this buffer has more + * than a few bytes to play with. Do not let it become too small or bad things + * will happen. */ #if READBUFFER_SIZE < READBUFFER_MIN # error READBUFFER_SIZE is too small @@ -254,7 +254,7 @@ CURLcode Curl_close(struct Curl_easy **datap) * handle might check the magic and so might any * DEBUGFUNCTION invoked for tracing */ - /* freed here just in case DONE was not called */ + /* freed here in case DONE was not called */ Curl_req_free(&data->req, data); /* Close down all open SSL info and sessions */ @@ -617,7 +617,7 @@ static bool socks_proxy_info_matches(const struct proxy_info *data, #endif /* A connection has to have been idle for less than 'conn_max_idle_ms' - (the success rate is just too low after this), or created less than + (the success rate is too low after this), or created less than 'conn_max_age_ms' ago, to be subject for reuse. */ static bool conn_maxage(struct Curl_easy *data, struct connectdata *conn, @@ -1411,7 +1411,7 @@ static struct connectdata *allocate_conn(struct Curl_easy *data) conn->http_proxy.proxytype = data->set.proxytype; conn->socks_proxy.proxytype = CURLPROXY_SOCKS4; - /* note that these two proxy bits are now just on what looks to be + /* note that these two proxy bits are set on what looks to be requested, they may be altered down the road */ conn->bits.proxy = (data->set.str[STRING_PROXY] && *data->set.str[STRING_PROXY]); @@ -3071,9 +3071,9 @@ static CURLcode resolve_unix(struct Curl_easy *data, DEBUGASSERT(unix_path); *pdns = NULL; - /* Unix domain sockets are local. The host gets ignored, just use the - * specified domain socket address. Do not cache "DNS entries". There is - * no DNS involved and we already have the file system path available. */ + /* Unix domain sockets are local. 
The host gets ignored, use the specified + * domain socket address. Do not cache "DNS entries". There is no DNS + * involved and we already have the file system path available. */ hostaddr = curlx_calloc(1, sizeof(struct Curl_dns_entry)); if(!hostaddr) return CURLE_OUT_OF_MEMORY; diff --git a/lib/urlapi.c b/lib/urlapi.c index c59f239756..f392b501bd 100644 --- a/lib/urlapi.c +++ b/lib/urlapi.c @@ -361,8 +361,8 @@ UNITTEST CURLUcode Curl_parse_port(struct Curl_URL *u, struct dynbuf *host, size_t keep = portptr - hostname; /* Browser behavior adaptation. If there is a colon with no digits after, - just cut off the name there which makes us ignore the colon and just - use the default port. Firefox, Chrome and Safari all do that. + cut off the name there which makes us ignore the colon and use the + default port. Firefox, Chrome and Safari all do that. Do not do it if the URL has no scheme, to make something that looks like a scheme not work! @@ -1078,7 +1078,7 @@ static CURLUcode handle_path(CURLU *u, const char *path, } if(pathlen <= 1) { - /* there is no path left or just the slash, unset */ + /* there is no path left or only the slash, unset */ path = NULL; } else { @@ -1089,7 +1089,7 @@ static CURLUcode handle_path(CURLU *u, const char *path, path = u->path; } else if(flags & CURLU_URLENCODE) - /* it might have encoded more than just the path so cut it */ + /* it might have encoded more than the path so cut it */ u->path[pathlen] = 0; if(!(flags & CURLU_PATH_AS_IS)) { diff --git a/lib/urldata.h b/lib/urldata.h index b437771517..138405c3f3 100644 --- a/lib/urldata.h +++ b/lib/urldata.h @@ -671,7 +671,7 @@ struct connectdata { * for concurrency reasons. That multi might run in another thread. * `attached_multi` is set by the first transfer attached and cleared * when the last one is detached. - * NEVER call anything on this multi, just check for equality. */ + * NEVER call anything on this multi, check for equality.
*/ struct Curl_multi *attached_multi; /*************** Request - specific items ************/ @@ -1268,7 +1268,7 @@ enum dupstring { STRING_COPYPOSTFIELDS, /* if POST, set the fields' values here */ - STRING_LAST /* not used, just an end-of-list marker */ + STRING_LAST /* not used, an end-of-list marker */ }; enum dupblob { diff --git a/lib/vquic/curl_ngtcp2.c b/lib/vquic/curl_ngtcp2.c index 54e1e37fd1..84ba839d47 100644 --- a/lib/vquic/curl_ngtcp2.c +++ b/lib/vquic/curl_ngtcp2.c @@ -784,7 +784,7 @@ static void cb_rand(uint8_t *dest, size_t destlen, result = Curl_rand(NULL, dest, destlen); if(result) { /* cb_rand is only used for non-cryptographic context. If Curl_rand - failed, just fill 0 and call it *random*. */ + failed, fill 0 and call it *random*. */ memset(dest, 0, destlen); } } @@ -2048,8 +2048,8 @@ static CURLcode cf_progress_egress(struct Curl_cfilter *cf, } else if(nread > gsolen || (gsolen > path_max_payload_size && nread != gsolen)) { - /* The just added packet is a PMTUD *or* the one(s) before the - * just added were PMTUD and the last one is smaller. + /* The newly added packet is a PMTUD *or* the one(s) added + * before it were PMTUD and the last one is smaller. * Flush the buffer before the last add.
*/ curlcode = vquic_send_tail_split(cf, data, &ctx->q, gsolen, nread, nread); diff --git a/lib/vquic/curl_quiche.c b/lib/vquic/curl_quiche.c index 8ee3e9c087..baf62df39d 100644 --- a/lib/vquic/curl_quiche.c +++ b/lib/vquic/curl_quiche.c @@ -60,7 +60,7 @@ #define H3_STREAM_WINDOW_SIZE (1024 * 128) #define H3_STREAM_CHUNK_SIZE (1024 * 16) -/* Receive and Send max number of chunks just follows from the +/* Receive and Send max number of chunks follows from the * chunk size and window size */ #define H3_STREAM_RECV_CHUNKS \ (H3_STREAM_WINDOW_SIZE / H3_STREAM_CHUNK_SIZE) @@ -126,7 +126,7 @@ static void cf_quiche_ctx_init(struct cf_quiche_ctx *ctx) static void cf_quiche_ctx_free(struct cf_quiche_ctx *ctx) { if(ctx && ctx->initialized) { - /* quiche just freed it */ + /* quiche freed it */ ctx->tls.ossl.ssl = NULL; Curl_vquic_tls_cleanup(&ctx->tls); Curl_ssl_peer_cleanup(&ctx->peer); @@ -1124,7 +1124,7 @@ static CURLcode cf_quiche_send(struct Curl_cfilter *cf, struct Curl_easy *data, * server. If the server has send us a final response, we should * silently discard the send data. * This happens for example on redirects where the server, instead - * of reading the full request body just closed the stream after + * of reading the full request body closed the stream after * sending the 30x response. * This is sort of a race: had the transfer loop called recv first, * it would see the response and stop/discard sending on its own- */ diff --git a/lib/vquic/vquic-tls.h b/lib/vquic/vquic-tls.h index a947cd277b..33adec2bc5 100644 --- a/lib/vquic/vquic-tls.h +++ b/lib/vquic/vquic-tls.h @@ -53,7 +53,7 @@ struct curl_tls_ctx { * Callback passed to `Curl_vquic_tls_init()` that can * do early initializations on the not otherwise configured TLS * instances created. 
This varies by TLS backend: - * - openssl/wolfssl: SSL_CTX* has just been created + * - openssl/wolfssl: SSL_CTX* has been created * - gnutls: gtls_client_init() has run */ typedef CURLcode Curl_vquic_tls_ctx_setup(struct Curl_cfilter *cf, diff --git a/lib/vquic/vquic.c b/lib/vquic/vquic.c index 1d0446aeee..1eaabb95f5 100644 --- a/lib/vquic/vquic.c +++ b/lib/vquic/vquic.c @@ -170,7 +170,7 @@ static CURLcode do_sendmsg(struct Curl_cfilter *cf, #endif return CURLE_AGAIN; case SOCKEMSGSIZE: - /* UDP datagram is too large; caused by PMTUD. Just let it be lost. */ + /* UDP datagram is too large; caused by PMTUD. Let it be lost. */ *psent = pktlen; break; case EIO: @@ -214,7 +214,7 @@ static CURLcode do_sendmsg(struct Curl_cfilter *cf, result = CURLE_SEND_ERROR; goto out; } - /* UDP datagram is too large; caused by PMTUD. Just let it be lost. */ + /* UDP datagram is too large; caused by PMTUD. Let it be lost. */ *psent = pktlen; } } diff --git a/lib/vssh/libssh.c b/lib/vssh/libssh.c index 7bd2101e52..3a2a52e1f8 100644 --- a/lib/vssh/libssh.c +++ b/lib/vssh/libssh.c @@ -1129,7 +1129,7 @@ static int myssh_in_SFTP_DOWNLOAD_STAT(struct Curl_easy *data, (attrs->size == 0)) { /* * sftp_fstat did not return an error, so maybe the server - * just does not support stat() + * does not support stat() * OR the server does not return a file size with a stat() * OR file size is 0 */ diff --git a/lib/vssh/libssh2.c b/lib/vssh/libssh2.c index 2d40e04fe4..fe2ffaf55c 100644 --- a/lib/vssh/libssh2.c +++ b/lib/vssh/libssh2.c @@ -1289,7 +1289,7 @@ static CURLcode sftp_download_stat(struct Curl_easy *data, (attrs.filesize == 0)) { /* * libssh2_sftp_open() did not return an error, so maybe the server - * just does not support stat() + * does not support stat() * OR the server does not return a file size with a stat() * OR file size is 0 */ diff --git a/lib/vtls/mbedtls.c b/lib/vtls/mbedtls.c index ce9f3ac6bb..2bac406c35 100644 --- a/lib/vtls/mbedtls.c +++ b/lib/vtls/mbedtls.c @@ -493,7 +493,7 
@@ static CURLcode mbed_load_cacert(struct Curl_cfilter *cf, if(ca_info_blob && verifypeer) { #ifdef MBEDTLS_PEM_PARSE_C - /* if DER or a null-terminated PEM just process using + /* if DER or a null-terminated PEM, process using mbedtls_x509_crt_parse(). */ if((ssl_cert_type && curl_strequal(ssl_cert_type, "DER")) || ((char *)(ca_info_blob->data))[ca_info_blob->len - 1] == '\0') { @@ -605,7 +605,7 @@ static CURLcode mbed_load_clicert(struct Curl_cfilter *cf, if(ssl_cert_blob) { #ifdef MBEDTLS_PEM_PARSE_C - /* if DER or a null-terminated PEM just process using + /* if DER or a null-terminated PEM, process using mbedtls_x509_crt_parse(). */ if((ssl_cert_type && curl_strequal(ssl_cert_type, "DER")) || ((char *)(ssl_cert_blob->data))[ssl_cert_blob->len - 1] == '\0') { diff --git a/lib/vtls/openssl.c b/lib/vtls/openssl.c index b5263d398b..fb2a3fba6b 100644 --- a/lib/vtls/openssl.c +++ b/lib/vtls/openssl.c @@ -1955,7 +1955,7 @@ static CURLcode ossl_shutdown(struct Curl_cfilter *cf, CURL_TRC_CF(data, cf, "SSL shutdown not received, but closed"); *done = TRUE; break; - case SSL_ERROR_NONE: /* just did not get anything */ + case SSL_ERROR_NONE: /* did not get anything */ case SSL_ERROR_WANT_READ: /* SSL has send its notify and now wants to read the reply * from the server. We are not really interested in that. */ @@ -2791,7 +2791,7 @@ static CURLcode load_cacert_from_memory(X509_STORE *store, BIO *cbio = NULL; STACK_OF(X509_INFO) *inf = NULL; - /* everything else is just a reference */ + /* everything else is a reference */ int i, count = 0; X509_INFO *itmp = NULL; @@ -3400,7 +3400,7 @@ ossl_init_session_and_alpns(struct ossl_ctx *octx, SSL_SESSION *ssl_session = NULL; /* If OpenSSL does not accept the session from the cache, this - * is not an error. We just continue without it. */ + * is not an error. We continue without it.
*/ ssl_session = d2i_SSL_SESSION(NULL, &der_sessionid, (long)der_sessionid_size); if(ssl_session) { @@ -3778,8 +3778,8 @@ CURLcode Curl_ossl_ctx_init(struct ossl_ctx *octx, The enabled extension concerns the session management. I wonder how often libcurl stops a connection and then resumes a TLS session. Also, sending - the session data is some overhead. I suggest that you just use your - proposed patch (which explicitly disables TICKET). + the session data is some overhead. I suggest that you use your proposed + patch (which explicitly disables TICKET). If someone writes an application with libcurl and OpenSSL who wants to enable the feature, one can do this in the SSL callback. diff --git a/lib/vtls/schannel.c b/lib/vtls/schannel.c index f9b475b122..f97dc65bca 100644 --- a/lib/vtls/schannel.c +++ b/lib/vtls/schannel.c @@ -2284,7 +2284,7 @@ static CURLcode schannel_recv(struct Curl_cfilter *cf, struct Curl_easy *data, backend->recv_sspi_close_notify = TRUE; if(!backend->recv_connection_closed) backend->recv_connection_closed = TRUE; - /* We received the close notify just fine, any error we got + /* We received the close notify fine, any error we got * from the lower filters afterwards (e.g. the socket), is not * an error on the TLS data stream. That one ended here. 
*/ if(result == CURLE_RECV_ERROR) diff --git a/lib/vtls/vtls.h b/lib/vtls/vtls.h index d21df11b4a..b9335bbf18 100644 --- a/lib/vtls/vtls.h +++ b/lib/vtls/vtls.h @@ -249,7 +249,7 @@ extern struct Curl_cftype Curl_cft_ssl_proxy; #else /* if not USE_SSL */ -/* When SSL support is not present, just define away these function calls */ +/* When SSL support is not present, define away these function calls */ #define Curl_ssl_init() 1 #define Curl_ssl_cleanup() Curl_nop_stmt #define Curl_ssl_close_all(x) Curl_nop_stmt diff --git a/lib/vtls/vtls_scache.c b/lib/vtls/vtls_scache.c index 9b7bc84197..a1f29814b3 100644 --- a/lib/vtls/vtls_scache.c +++ b/lib/vtls/vtls_scache.c @@ -58,7 +58,7 @@ struct Curl_ssl_scache_peer { unsigned char key_salt[CURL_SHA256_DIGEST_LENGTH]; /* for entry export */ unsigned char key_hmac[CURL_SHA256_DIGEST_LENGTH]; /* for entry export */ size_t max_sessions; - long age; /* just a number, the higher the more recent */ + long age; /* a number, the higher the more recent */ BIT(hmac_set); /* if key_salt and key_hmac are present */ BIT(exportable); /* sessions for this peer can be exported */ }; @@ -288,7 +288,7 @@ CURLcode Curl_ssl_peer_key_make(struct Curl_cfilter *cf, goto out; *ppeer_key = curlx_dyn_take(&buf, &key_len); - /* we just added printable char, and dynbuf always null-terminates, no need + /* we added printable char, and dynbuf always null-terminates, no need * to track length */ out: diff --git a/lib/vtls/wolfssl.c b/lib/vtls/wolfssl.c index c2d134f7b8..aa841a754a 100644 --- a/lib/vtls/wolfssl.c +++ b/lib/vtls/wolfssl.c @@ -658,7 +658,7 @@ static CURLcode wssl_populate_x509_store(struct Curl_cfilter *cf, return CURLE_SSL_CACERT_BADFILE; } else { - /* Just continue with a warning if no strict certificate + /* continue with a warning if no strict certificate verification is required. 
*/ infof(data, "error setting certificate verify locations," " continuing anyway:"); @@ -1612,7 +1612,7 @@ static CURLcode wssl_send_earlydata(struct Curl_cfilter *cf, int err = wolfSSL_get_error(wssl->ssl, rc); char error_buffer[256]; switch(err) { - case WOLFSSL_ERROR_NONE: /* just did not get anything */ + case WOLFSSL_ERROR_NONE: /* did not get anything */ case WOLFSSL_ERROR_WANT_READ: case WOLFSSL_ERROR_WANT_WRITE: return CURLE_AGAIN; @@ -1741,7 +1741,7 @@ static CURLcode wssl_handshake(struct Curl_cfilter *cf, struct Curl_easy *data) failf(data, " CA signer not available for verification"); return CURLE_SSL_CACERT_BADFILE; } - /* Just continue with a warning if no strict certificate + /* Continue with a warning if no strict certificate verification is required. */ infof(data, "CA signer not available for verification, " "continuing anyway"); @@ -1941,7 +1941,7 @@ static CURLcode wssl_shutdown(struct Curl_cfilter *cf, CURL_TRC_CF(data, cf, "SSL shutdown received"); *done = TRUE; break; - case WOLFSSL_ERROR_NONE: /* just did not get anything */ + case WOLFSSL_ERROR_NONE: /* did not get anything */ case WOLFSSL_ERROR_WANT_READ: /* wolfSSL has send its notify and now wants to read the reply * from the server. We are not really interested in that. */ diff --git a/lib/vtls/x509asn1.c b/lib/vtls/x509asn1.c index e4660801ea..3424456a27 100644 --- a/lib/vtls/x509asn1.c +++ b/lib/vtls/x509asn1.c @@ -369,7 +369,7 @@ static CURLcode utf8asn1str(struct dynbuf *to, int type, const char *from, return CURLE_BAD_FUNCTION_ARGUMENT; if(type == CURL_ASN1_UTF8_STRING) { - /* Just copy. */ + /* copy. 
*/ if(inlength) result = curlx_dyn_addn(to, from, inlength); } diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt index 87c26e30d1..5eb7e8282d 100644 --- a/src/CMakeLists.txt +++ b/src/CMakeLists.txt @@ -112,7 +112,7 @@ add_executable(curlinfo EXCLUDE_FROM_ALL "curlinfo.c") target_link_libraries(curlinfo PRIVATE ${CURL_LIBS}) set_target_properties(curlinfo PROPERTIES UNITY_BUILD OFF) -# special libcurltool library just for unittests +# special libcurltool library for unittests add_library(curltool STATIC EXCLUDE_FROM_ALL ${CURL_CFILES} ${CURL_HFILES} ${_curlx_cfiles_lib} ${_curlx_hfiles_lib}) target_compile_definitions(curltool PUBLIC "CURL_STATICLIB" "UNITTESTS") target_link_libraries(curltool PUBLIC ${CURL_LIBS}) diff --git a/src/config2setopts.c b/src/config2setopts.c index a7d0a4e214..17eb8c73e9 100644 --- a/src/config2setopts.c +++ b/src/config2setopts.c @@ -249,7 +249,7 @@ static long tlsversion(unsigned char mintls, tlsver = CURL_SSLVERSION_TLSv1_2; break; case 4: - default: /* just in case */ + default: /* as a safeguard */ tlsver = CURL_SSLVERSION_TLSv1_3; break; } @@ -266,7 +266,7 @@ static long tlsversion(unsigned char mintls, tlsver |= CURL_SSLVERSION_MAX_TLSv1_2; break; case 4: - default: /* just in case */ + default: /* as a safeguard */ tlsver |= CURL_SSLVERSION_MAX_TLSv1_3; break; } diff --git a/src/tool_cb_dbg.c b/src/tool_cb_dbg.c index 0c8568ff62..c9c14e6d13 100644 --- a/src/tool_cb_dbg.c +++ b/src/tool_cb_dbg.c @@ -228,8 +228,7 @@ int tool_debug_cb(CURL *handle, curl_infotype type, if(!traced_data) { /* if the data is output to a tty and we are sending this debug trace to stderr or stdout, we do not display the alert about the data not - being shown as the data _is_ shown then just not via this - function */ + being shown, as the data _is_ shown then, only not via this function */ if(!global->isatty || ((output != tool_stderr) && (output != stdout))) { if(!newl) diff --git a/src/tool_cb_hdr.c b/src/tool_cb_hdr.c index 76d0f7c458..fc03cf143b 100644 ---
a/src/tool_cb_hdr.c +++ b/src/tool_cb_hdr.c @@ -235,7 +235,7 @@ int tool_write_headers(struct HdrCbData *hdrcbdata, FILE *stream) struct curl_slist *h = hdrcbdata->headlist; int rc = 1; while(h) { - /* not "handled", just show it */ + /* not "handled", show it */ size_t len = strlen(h->data); if(len != fwrite(h->data, 1, len, stream)) goto fail; @@ -536,7 +536,7 @@ size_t tool_header_cb(char *ptr, size_t size, size_t nmemb, void *userdata) #endif } else - /* not "handled", just show it */ + /* not "handled", show it */ fwrite(ptr, cb, 1, outs->stream); } return cb; diff --git a/src/tool_cb_prg.c b/src/tool_cb_prg.c index 0280b30849..461ac051fc 100644 --- a/src/tool_cb_prg.c +++ b/src/tool_cb_prg.c @@ -226,7 +226,7 @@ void progressbarinit(struct ProgressData *bar, struct OperationConfig *config) memset(bar, 0, sizeof(struct ProgressData)); /* pass the resume from value through to the progress function so it can - * display progress towards total file not just the part that is left. */ + * display progress towards total file not the part that is left. */ if(config->use_resume) bar->initial_size = config->resume_from; diff --git a/src/tool_cb_see.c b/src/tool_cb_see.c index a1983b94cc..e1ebd8d99f 100644 --- a/src/tool_cb_see.c +++ b/src/tool_cb_see.c @@ -77,7 +77,7 @@ int tool_seek_cb(void *userdata, curl_off_t offset, int whence) #endif if(curl_lseek(per->infd, offset, whence) == LSEEK_ERROR) - /* could not rewind, the reason is in errno but errno is just not portable + /* could not rewind, the reason is in errno but errno is not portable enough and we do not actually care that much why we failed. We will let libcurl know that it may try other means if it wants to. 
*/ return CURL_SEEKFUNC_CANTSEEK; diff --git a/src/tool_formparse.c b/src/tool_formparse.c index 0ec434e2de..20f7e366a0 100644 --- a/src/tool_formparse.c +++ b/src/tool_formparse.c @@ -821,7 +821,7 @@ int formparse(const char *input, SET_TOOL_MIME_PTR(part, type); SET_TOOL_MIME_PTR(part, encoder); - /* *contp could be '\0', so we just check with the delimiter */ + /* *contp could be '\0', so we check with the delimiter */ } while(sep); /* loop if there is another filename */ part = (*mimecurrent)->subparts; /* Set name on group. */ } diff --git a/src/tool_getparam.c b/src/tool_getparam.c index e30a4ad815..e6e641923b 100644 --- a/src/tool_getparam.c +++ b/src/tool_getparam.c @@ -421,7 +421,7 @@ UNITTEST ParameterError parse_cert_parameter(const char *cert_parameter, memcpy(certname_place, param_place, span); param_place += span; certname_place += span; - /* we just ate all the non-special chars. now we are on either a special + /* we ate all the non-special chars. now we are on either a special * char or the end of the string. 
*/ switch(*param_place) { case '\0': @@ -1270,7 +1270,7 @@ static ParameterError parse_ech(struct OperationConfig *config, } /* file done */ } else { - /* Simple case: just a string, with a keyword */ + /* Simple case: a string, with a keyword */ err = getstr(&config->ech, nextarg, DENY_BLANK); } return err; @@ -1415,7 +1415,7 @@ static ParameterError parse_quote(struct OperationConfig *config, err = add2list(&config->postquote, nextarg); break; case '+': - /* prefixed with a plus makes it a just-before-transfer one */ + /* prefixed with a plus makes it an immediately-before-transfer one */ nextarg++; err = add2list(&config->prequote, nextarg); break; @@ -3115,7 +3115,7 @@ ParameterError parse_args(int argc, argv_item_t argv[]) else { bool used; - /* Just add the URL please */ + /* add the URL please */ err = getparameter("--url", orig_opt, &used, config, 0); } diff --git a/src/tool_ipfs.c b/src/tool_ipfs.c index 3d2d5dea84..8e762b8cf0 100644 --- a/src/tool_ipfs.c +++ b/src/tool_ipfs.c @@ -181,7 +181,7 @@ CURLcode ipfs_url_rewrite(CURLU *uh, const char *protocol, char **url, curl_url_set(uh, CURLUPART_PORT, gwport, CURLU_URLENCODE)) goto clean; - /* if the input path is just a slash, clear it */ + /* if the input path is a slash, clear it */ if(inputpath && (inputpath[0] == '/') && !inputpath[1]) *inputpath = '\0'; diff --git a/src/tool_msgs.c b/src/tool_msgs.c index 18e014950d..9bc8ca18f1 100644 --- a/src/tool_msgs.c +++ b/src/tool_msgs.c @@ -55,8 +55,8 @@ static void voutf(const char *prefix, const char *fmt, va_list ap) cut--; } if(cut == 0) - /* not a single cutting position was found, just cut it at the - max text width then! */ + /* not a single cutting position was found, cut it at the max text + width then! 
*/ cut = width - 1; (void)fwrite(ptr, cut + 1, 1, tool_stderr); diff --git a/src/tool_operate.c b/src/tool_operate.c index d97a0c4eb2..cf900364f7 100644 --- a/src/tool_operate.c +++ b/src/tool_operate.c @@ -554,17 +554,17 @@ static CURLcode retrycheck(struct OperationConfig *config, /* truncate file at the position where we started appending */ #if defined(HAVE_FTRUNCATE) && !defined(__DJGPP__) && !defined(__AMIGA__) if(ftruncate(fileno(outs->stream), outs->init)) { - /* when truncate fails, we cannot just append as then we will + /* when truncate fails, we cannot append as then we will create something strange, bail out */ errorf("Failed to truncate file"); return CURLE_WRITE_ERROR; } /* now seek to the end of the file, the position where we - just truncated the file in a large file-safe way */ + truncated the file in a large file-safe way */ rc = fseek(outs->stream, 0, SEEK_END); #else - /* ftruncate is not available, so just reposition the file - to the location we would have truncated it. */ + /* ftruncate is not available, so reposition the file to the location + we would have truncated it. */ rc = curlx_fseek(outs->stream, outs->init, SEEK_SET); #endif if(rc) { @@ -951,7 +951,7 @@ static CURLcode setup_headerfile(struct OperationConfig *config, * Since every transfer has its own file handle for dumping * the headers, we need to open it in append mode, since transfers * might finish in any order. - * The first transfer just clears the file. + * The first transfer clears the file. * * Consider placing the file handle inside the OperationConfig, so * that it does not need to be opened/closed for every transfer. 
@@ -1995,7 +1995,7 @@ static CURLcode serial_transfers(CURLSH *share) bailout = TRUE; else { do { - /* setup the next one just before we delete this */ + /* setup the next one before we delete this */ result = create_transfer(share, &added, &skipped); if(result) { returncode = result; diff --git a/src/tool_urlglob.c b/src/tool_urlglob.c index cbd97c4e23..f62d9738ec 100644 --- a/src/tool_urlglob.c +++ b/src/tool_urlglob.c @@ -492,8 +492,8 @@ CURLcode glob_url(struct URLGlob *glob, const char *url, curl_off_t *urlnum, FILE *error) { /* - * We can deal with any-size, just make a buffer with the same length - * as the specified URL! + * We can deal with any size; make a buffer with the same length as the + * specified URL! */ curl_off_t amount = 0; CURLcode result; diff --git a/src/tool_writeout.c b/src/tool_writeout.c index 2700abfa44..22c83efd03 100644 --- a/src/tool_writeout.c +++ b/src/tool_writeout.c @@ -619,7 +619,7 @@ static void separator(const char *sep, size_t seplen, FILE *stream) case '\0': break; default: - /* unknown, just output this */ + /* unknown, output this */ fputc(sep[0], stream); fputc(sep[1], stream); break; @@ -834,7 +834,7 @@ void ourWriteOut(struct OperationConfig *config, struct per_transfer *per, fputs("%output{", stream); } else { - /* illegal syntax, then just output the characters that are used */ + /* illegal syntax, then output the characters that are used */ fputc('%', stream); fputc(ptr[1], stream); ptr += 2; @@ -853,7 +853,7 @@ void ourWriteOut(struct OperationConfig *config, struct per_transfer *per, fputc('\t', stream); break; default: - /* unknown, just output this */ + /* unknown, output this */ fputc(*ptr, stream); fputc(ptr[1], stream); break; diff --git a/tests/unit/README.md b/tests/unit/README.md index 190821b33b..a585f14cbc 100644 --- a/tests/unit/README.md +++ b/tests/unit/README.md @@ -13,8 +13,8 @@ big and complicated, we should split them into smaller and testable ones.
`./configure --enable-debug` is required for the unit tests to build. To enable unit tests, there is a separate static libcurl built that is used -exclusively for linking unit test programs. Just build everything as normal, -and then you can run the unit test cases as well. +exclusively for linking unit test programs. Build everything as normal, and +then you can run the unit test cases as well. ## Run Unit Tests