                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|
Things that could be nice to do in the future

Things to do in project curl. Please tell us what you think, contribute and
send us patches that improve things!

Be aware that these are things that we could do, or have once been considered
things we could do. If you want to work on any of these areas, please
consider bringing it up for discussion first on the mailing list so that we
all agree it is still a good idea for the project!

All bugs documented in the KNOWN_BUGS document are subject for fixing!
1. libcurl
 1.2 More data sharing
 1.3 struct lifreq
 1.4 signal-based resolver timeouts
 1.5 get rid of PATH_MAX
 1.6 Modified buffer size approach
 1.7 Detect when called from within callbacks
 1.8 CURLOPT_RESOLVE for any port number
 1.9 Cache negative name resolves
 1.11 minimize dependencies with dynamically loaded modules
 1.12 have form functions use CURL handle argument
 1.14 Typesafe curl_easy_setopt()
 1.15 Monitor connections in the connection pool
 1.16 Try to URL encode given URL
 1.17 Add support for IRIs
 1.18 try next proxy if one doesn't work
 1.19 Timeout idle connections from the pool
 1.20 SRV and URI DNS records
 1.21 API for URL parsing/splitting
 1.23 Offer API to flush the connection pool
 1.24 TCP Fast Open for Windows
 1.25 Remove the generated include file

2. libcurl - multi interface
 2.1 More non-blocking
 2.2 Better support for same name resolves
 2.3 Non-blocking curl_multi_remove_handle()
 2.4 Split connect and authentication process
 2.5 Edge-triggered sockets should work

3. Documentation
 3.1 Update date and version in man pages
 3.2 Provide cmake config-file

4. FTP
 4.1 HOST
 4.2 Alter passive/active on failure and retry
 4.3 Earlier bad letter detection
 4.4 REST for large files
 4.5 ASCII support
 4.6 GSSAPI via Windows SSPI
 4.7 STAT for LIST without data connection

5. HTTP
 5.1 Better persistency for HTTP 1.0
 5.2 support FF3 sqlite cookie files
 5.3 Rearrange request header order
 5.4 HTTP Digest using SHA-256
 5.5 auth= in URLs
 5.6 Refuse "downgrade" redirects
 5.7 Brotli compression
 5.8 QUIC
 5.9 Improve formpost API
 5.10 Leave secure cookies alone
 5.11 Chunked transfer multipart formpost
 5.12 OPTIONS *

6. TELNET
 6.1 ditch stdin
 6.2 ditch telnet-specific select
 6.3 feature negotiation debug data
 6.4 send data in chunks

7. SMTP
 7.1 Pipelining
 7.2 Enhanced capability support
 7.3 Add CURLOPT_MAIL_CLIENT option

8. POP3
 8.1 Pipelining
 8.2 Enhanced capability support

9. IMAP
 9.1 Enhanced capability support

10. LDAP
 10.1 SASL based authentication mechanisms

11. SMB
 11.1 File listing support
 11.2 Honor file timestamps
 11.3 Use NTLMv2
 11.4 Create remote directories

12. New protocols
 12.1 RSYNC

13. SSL
 13.1 Disable specific versions
 13.2 Provide mutex locking API
 13.3 Evaluate SSL patches
 13.4 Cache/share OpenSSL contexts
 13.5 Export session ids
 13.6 Provide callback for cert verification
 13.7 improve configure --with-ssl
 13.8 Support DANE
 13.10 Support SSLKEYLOGFILE
 13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
 13.12 Support HSTS
 13.13 Support HPKP

14. GnuTLS
 14.1 SSL engine stuff
 14.2 check connection

15. WinSSL/SChannel
 15.1 Add support for client certificate authentication
 15.2 Add support for custom server certificate validation
 15.3 Add support for the --ciphers option

16. SASL
 16.1 Other authentication mechanisms
 16.2 Add QOP support to GSSAPI authentication
 16.3 Support binary messages (i.e.: non-base64)

17. SSH protocols
 17.1 Multiplexing
 17.2 SFTP performance
 17.3 Support better than MD5 hostkey hash

18. Command line tool
 18.1 sync
 18.2 glob posts
 18.3 prevent file overwriting
 18.4 simultaneous parallel transfers
 18.5 provide formpost headers
 18.6 warning when setting an option
 18.7 warning when sending binary output to terminal
 18.8 offer color-coded HTTP header output
 18.9 Choose the name of file in braces for complex URLs
 18.10 improve how curl works in a windows console window
 18.11 -w output to stderr
 18.12 keep running, read instructions from pipe/socket
 18.13 support metalink in http headers
 18.14 --fail without --location should treat 3xx as a failure
 18.15 --retry should resume
 18.16 send only part of --data
 18.17 consider file name from the redirected URL with -O ?

19. Build
 19.1 roffit
 19.2 Enable PIE and RELRO by default

20. Test suite
 20.1 SSL tunnel
 20.2 nicer lacking perl message
 20.3 more protocols supported
 20.4 more platforms supported
 20.5 Add support for concurrent connections
 20.6 Use the RFC6265 test suite

21. Next SONAME bump
 21.1 http-style HEAD output for FTP
 21.2 combine error codes
 21.3 extend CURLOPT_SOCKOPTFUNCTION prototype

22. Next major release
 22.1 cleanup return codes
 22.2 remove obsolete defines
 22.4 remove several functions
 22.5 remove CURLOPT_FAILONERROR
 22.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE
 22.7 remove progress meter from libcurl
 22.8 remove 'curl_httppost' from public
==============================================================================

1. libcurl

1.2 More data sharing

curl_share_* functions already exist and work, and they can be extended to
share more. For example, enable sharing of the ares channel and the
connection cache.

1.3 struct lifreq

Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
SIOCGIFADDR on newer Solaris versions, as they claim the latter is obsolete,
to support IPv6 interface addresses for network interfaces properly.
1.4 signal-based resolver timeouts

libcurl built without an asynchronous resolver library uses alarm() to time
out DNS lookups. When a timeout occurs, this causes libcurl to jump from the
signal handler back into the library with a sigsetjmp, which effectively
causes libcurl to continue running within the signal handler. This is
non-portable and could cause problems on some platforms. A discussion on the
problem is available at https://curl.haxx.se/mail/lib-2008-09/0197.html

Also, alarm() provides timeout resolution only to the nearest second. alarm()
ought to be replaced by setitimer() on systems that support it.
1.5 get rid of PATH_MAX

Having code use and rely on PATH_MAX is not nice:
http://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html

Currently the SSH based code uses it a bit, but to remove PATH_MAX from there
we need libssh2 to properly tell us when we pass in a too small buffer and
its current API (as of libssh2 1.2.7) doesn't.
1.6 Modified buffer size approach

Currently libcurl allocates a fixed 16K buffer for download and an
additional 16K for upload. They are always unconditionally part of the easy
handle. If CRLF translations are requested, an additional 32K "scratch
buffer" is allocated: a total of 64K of transfer buffers in the worst case.

First, these buffers could be freed while the handles are not actually in
use, so that lingering handles just kept in queues waste less memory.

Secondly, SFTP is a protocol that needs to handle many ~30K blocks at once,
since each needs to be individually acked, and libssh2 must therefore be
allowed to send (or receive) many separate ones in parallel to achieve high
transfer speeds. A current libcurl build with a 16K buffer makes that
impossible, but one with a 512K buffer reaches MUCH faster transfers. Yet
allocating 512K unconditionally for all buffers, just in case they would like
to do fast SFTP transfers at some point, is not a good solution either.

Dynamically allocate the buffer size depending on the protocol in use, in
combination with freeing it after each individual transfer? Other
suggestions?
1.7 Detect when called from within callbacks

We should set a state variable before calling callbacks, so that we can
subsequently add code within libcurl that returns an error if called from
within a callback, for cases where that is not supported.
1.8 CURLOPT_RESOLVE for any port number

This option allows applications to set a replacement IP address for a given
host + port pair. Consider adding support for providing a replacement
address for the host name on all port numbers.

See https://github.com/curl/curl/issues/1264
1.9 Cache negative name resolves

A name resolve that has failed is likely to fail again if made within a
short period of time. Currently we only cache positive responses.
1.11 minimize dependencies with dynamically loaded modules

We can create a system with loadable modules/plug-ins, where these modules
would be the ones that link to 3rd party libs. That would allow us to avoid
having to load ALL dependencies, since only the ones needed for the
protocols actually used by the application would have to be loaded. See
https://github.com/curl/curl/issues/349
1.12 have form functions use CURL handle argument

curl_formadd() and curl_formget() both currently have no CURL handle
argument, but both can use a callback that is set in the easy handle, and
thus curl_formget() with a callback cannot function without first having
curl_easy_perform() (or similar) called - which is hard to grasp and a
design mistake.

The curl_formadd() design can probably also be reconsidered to make it
easier to use and less error-prone, probably easiest by splitting it into
several functions.
1.14 Typesafe curl_easy_setopt()

One of the most common problems in applications using libcurl is the lack of
type checks for curl_easy_setopt(), which happens because it accepts varargs
and thus can take any type.

One possible solution to this is to introduce a few different versions of
the setopt call for the different kinds of data you can set:

curl_easy_set_num() - sets a long value

curl_easy_set_large() - sets a curl_off_t value

curl_easy_set_ptr() - sets a pointer

curl_easy_set_cb() - sets a callback PLUS its callback data
1.15 Monitor connections in the connection pool

libcurl's connection cache or pool holds a number of open connections for
the purpose of possible subsequent connection reuse. It may contain anything
from a few up to a significant number of connections. Currently, libcurl
leaves all connections as they are, and only when a connection is iterated
over for matching or reuse purposes is it verified that it is still alive.

Those connections may get closed by the server side for idleness, or they
may get an HTTP/2 ping from the peer to verify that they're still alive. By
adding monitoring of the connections while in the pool, libcurl can detect
dead connections (and close them) better and earlier, and it can handle
HTTP/2 pings to keep such connections alive even when not actively doing
transfers on them.
1.16 Try to URL encode given URL

Given a URL that for example contains spaces, libcurl could have an option
that would try somewhat harder than it does now and convert spaces to %20
and perhaps URL encode byte values over 128, etc (basically do what the
redirect following code already does).

https://github.com/curl/curl/issues/514
1.17 Add support for IRIs

IRIs (RFC 3987) allow localized, non-ASCII, names in the URL. To properly
support this, curl/libcurl would need to translate/encode the given input
from the input string encoding into percent encoded output "over the wire".

To make that work smoothly for curl users even on Windows, curl would
probably need to be able to convert from several input encodings.
1.18 try next proxy if one doesn't work

Allow an application to specify a list of proxies to try, and on failing to
connect to the first, go on and try the next instead, until the list is
exhausted. Browsers support this feature at least when they specify proxies
with PAC.

https://github.com/curl/curl/issues/896
1.19 Timeout idle connections from the pool

libcurl currently keeps connections in its connection pool for an indefinite
period of time, until one is either reused, noticed to have been closed by
the server, or pruned to make room for a new connection.

To reduce overhead (especially for when we add monitoring of the connections
in the pool), we should introduce a timeout so that connections that have
been idle for N seconds get closed.
1.20 SRV and URI DNS records

Offer support for resolving SRV and URI DNS records for libcurl to know
which server to connect to for various protocols (including HTTP!).

1.21 API for URL parsing/splitting

libcurl has always parsed URLs internally and never exposed any API or
features to allow applications to do it. Still, many applications using
libcurl need that ability. In polls, we've learned that many libcurl users
would like to see and use such an API.
1.23 Offer API to flush the connection pool

Sometimes applications want to flush all the existing connections kept
alive. An API could allow a forced flush, or just a forced loop that would
properly close all connections that have already been closed by the server.

1.24 TCP Fast Open for Windows

libcurl supports the CURLOPT_TCP_FASTOPEN option since 7.49.0 for Linux and
Mac OS. Windows supports TCP Fast Open starting with Windows 10, version
1607, and we should add support for it.
1.25 Remove the generated include file

When curl and libcurl are built, one of the public include files is
generated and populated with a set of defines that are derived from sizes
and constants for the particular target architecture the build is made for.
For platforms that can select between 32 bit and 64 bit at build time, this
approach makes the libcurl build only create a set of public headers
suitable for one of the architectures and not both. If you build libcurl for
such a platform and you want to allow applications to get built using either
the 32 or the 64 bit version, you must generate the libcurl headers once for
each setup and you must then add a replacement curl header that would itself
select the correct 32 or 64 bit specific header as necessary.

Your curl/curl.h alternative could then look like this (replace with
suitable CPP conditionals):

    #ifdef ARCH_32bit
    #include <curl32/curl.h>
    #else /* ARCH_64bit */
    #include <curl64/curl.h>
    #endif
A fix would either (A) fix the 32/64 setup automatically, or even better (B)
work away the architecture specific defines from the headers so that they
can be used for all architectures independently of what libcurl was built
for.
2. libcurl - multi interface

2.1 More non-blocking

Make sure we don't ever loop because of non-blocking sockets returning
EWOULDBLOCK or similar. Blocking cases include:

- Name resolves on non-Windows unless c-ares or the threaded resolver is used
- HTTP proxy CONNECT operations
- SOCKS proxy handshakes
- file:// transfers
- TELNET transfers
- The "DONE" operation (post transfer protocol-specific actions) for the
  protocols SFTP, SMTP, FTP. Fixing Curl_done() for this is a worthy task.
2.2 Better support for same name resolves

If a name resolve has been initiated for name NN and a second easy handle
wants to resolve that name as well, make it wait for the first resolve to
end up in the cache instead of doing a second separate resolve. This is
especially needed when adding many simultaneous handles using the same host
name, when the DNS resolver could otherwise get flooded.
2.3 Non-blocking curl_multi_remove_handle()

The multi interface has a few API calls that assume a blocking behavior,
like add_handle() and remove_handle(), which limits what we can do
internally. The multi API needs to be moved even more into a single function
that "drives" everything in a non-blocking manner and signals when something
is done. A remove or add would then only ask for the action to get started,
and multi_perform() etc would still be called until the add/remove is
completed.
2.4 Split connect and authentication process

The multi interface treats the authentication process as part of the connect
phase. As such, any failures during authentication won't trigger the
relevant QUIT or LOGOFF commands for protocols such as IMAP, POP3 and SMTP.

2.5 Edge-triggered sockets should work

The multi_socket API should work with edge-triggered socket events. One of
the internal actions that needs to be improved for this to work perfectly is
the 'maxloops' handling in transfer.c:readwrite_data().
3. Documentation

3.1 Update date and version in man pages

'maketgz' or another suitable script could update the .TH sections of the
man pages at release time to use the current date and curl/libcurl version
number.

3.2 Provide cmake config-file

A config-file package is a set of files provided by us to allow applications
to write cmake scripts to find and use libcurl more easily. See
https://github.com/curl/curl/issues/885
4. FTP

4.1 HOST

HOST is a command for a client to tell which host name to use, to offer FTP
servers name-based virtual hosting:

https://tools.ietf.org/html/rfc7151

4.2 Alter passive/active on failure and retry

When trying to connect passively to a server which only supports active
connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
connection. There could be a way to fall back to an active connection (and
vice versa). https://curl.haxx.se/bug/feature.cgi?id=1754793
4.3 Earlier bad letter detection

Make the detection of (bad) %0d and %0a codes in FTP URL parts earlier in
the process to avoid doing a resolve and connect in vain.

4.4 REST for large files

REST fix for servers not behaving well on >2GB requests. This should fail if
the server doesn't set the pointer to the requested index. The tricky
(impossible?) part is to figure out if the server did the right thing or
not.

4.5 ASCII support

FTP ASCII transfers do not follow RFC 959. They don't convert the data
accordingly.
4.6 GSSAPI via Windows SSPI

In addition to currently supporting the SASL GSSAPI mechanism (Kerberos V5)
via third-party GSS-API libraries, such as Heimdal or MIT Kerberos, also add
support for GSSAPI authentication via Windows SSPI.

4.7 STAT for LIST without data connection

Some FTP servers allow STAT for listing directories instead of using LIST,
and the response is then sent over the control connection instead of the
otherwise used data connection: http://www.nsftools.com/tips/RawFTP.htm#STAT

This is not detailed in any FTP specification.
5. HTTP

5.1 Better persistency for HTTP 1.0

"Better" support for persistent connections over HTTP 1.0
https://curl.haxx.se/bug/feature.cgi?id=1089001

5.2 support FF3 sqlite cookie files

Firefox 3 is changing from its former cookie file format to an sqlite
database instead. We should consider how (lib)curl can/should support this.
https://curl.haxx.se/bug/feature.cgi?id=1871388
5.3 Rearrange request header order

Server implementors often make an effort to detect browsers and to reject
clients they can detect not to match. One of the last details we cannot yet
control in libcurl's HTTP requests, which can also be exploited to detect
that libcurl is in fact used even when it tries to impersonate a browser, is
the order of the request headers. I propose that we introduce a new option
in which you give headers a value, and then when the HTTP request is built
it sorts the headers based on that number. We could then have internally
created headers use a default value so only headers that need to be moved
have to be specified.
5.4 HTTP Digest using SHA-256

RFC 7616 introduces an update to the HTTP Digest authentication
specification, which amongst other things defines how new digest algorithms
can be used instead of MD5, which is considered old and not recommended.

See https://tools.ietf.org/html/rfc7616 and
https://github.com/curl/curl/issues/1018
5.5 auth= in URLs

Add the ability to specify the preferred authentication mechanism to use by
using ;auth=<mech> in the login part of the URL.

For example:

http://test:pass;auth=NTLM@example.com would be equivalent to specifying
--user test:pass;auth=NTLM or --user test:pass --ntlm from the command line.

Additionally, this should be implemented for proxy base URLs as well.
5.6 Refuse "downgrade" redirects

See https://github.com/curl/curl/issues/226

Consider a way to tell curl to refuse to "downgrade" the protocol with a
redirect, and/or possibly a bit that refuses a redirect to change protocol
completely.

5.7 Brotli compression

Brotli compression performs better than gzip and is being implemented widely
by browsers and servers. The algorithm: https://github.com/google/brotli
The Firefox bug: https://bugzilla.mozilla.org/show_bug.cgi?id=366559
5.8 QUIC

The standardization process of QUIC has been taken to the IETF and can be
followed on the IETF QUIC mailing list
(https://www.ietf.org/mailman/listinfo/quic). I'd like us to get on the
bandwagon. Ideally, this would be done with a separate library/project to
handle the binary/framing layer, in a similar fashion to how HTTP/2 is
implemented. This, to allow other projects to benefit from the work and to
thus broaden the interest and chance of others to participate.

5.9 Improve formpost API

Revamp the formpost API, making something that is easier to use and
understand:

https://github.com/curl/curl/wiki/formpost-API-redesigned
5.10 Leave secure cookies alone

Non-secure origins (HTTP sites) should not be allowed to set or modify
cookies with the 'secure' property:

https://tools.ietf.org/html/draft-ietf-httpbis-cookie-alone-01

5.11 Chunked transfer multipart formpost

For the case where the file is being created while the upload is in progress
(like when passed on stdin to the curl tool), we cannot know the size
beforehand and we would rather not read the entire thing into memory before
the upload can start.

https://github.com/curl/curl/issues/1139
5.12 OPTIONS *

HTTP defines an OPTIONS method that can be sent with an asterisk, like
"OPTIONS *", to ask about options from the server and not a specific URL
resource. https://tools.ietf.org/html/rfc7230#section-5.3.4

libcurl as it currently works will always send HTTP methods with a path that
starts with a slash, so there's no way for an application to send a proper
"OPTIONS *" using libcurl. This should be fixed.

I can't think of any other non-slash paths we should support, so it will
probably make sense to add a new boolean option for issuing an "OPTIONS *"
request. CURLOPT_OPTIONSASTERISK perhaps (and a corresponding command line
option).

See https://github.com/curl/curl/issues/1280
6. TELNET

6.1 ditch stdin

Reading input (to send to the remote server) on stdin is a crappy solution
for library purposes. We need to invent a good way for the application to be
able to provide the data to send.

6.2 ditch telnet-specific select

Make the telnet support's network select() loop go away and merge the code
into the main transfer loop. Until this is done, the multi interface won't
work for telnet.

6.3 feature negotiation debug data

Add telnet feature negotiation data to the debug callback as header data.
6.4 send data in chunks

Currently, telnet sends data one byte at a time. This is fine for
interactive use, but inefficient for anything else. Sent data should be sent
in larger chunks.

7. SMTP

7.1 Pipelining

Add support for pipelining emails.
7.2 Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the EHLO command.

7.3 Add CURLOPT_MAIL_CLIENT option

Rather than use the URL to specify the mail client string to present in the
HELO and EHLO commands, libcurl should support a new CURLOPT specifically
for specifying this data, as the URL is non-standard and, to be honest, a
bit of a hack.

Please see the following thread for more information:
https://curl.haxx.se/mail/lib-2012-05/0178.html
8. POP3

8.1 Pipelining

Add support for pipelining commands.

8.2 Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the CAPA command.
9. IMAP

9.1 Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the CAPABILITY command.
10. LDAP

10.1 SASL based authentication mechanisms

Currently the LDAP module only supports ldap_simple_bind_s() in order to
bind to an LDAP server. However, this function sends username and password
details using the simple authentication mechanism (as clear text). It should
instead be possible to use ldap_bind_s(), specifying the security context
information ourselves.
11. SMB

11.1 File listing support

Add support for listing the contents of an SMB share. The output should
probably be the same as/similar to FTP's.

11.2 Honor file timestamps

The timestamp of the transferred file should reflect that of the original
file.

11.3 Use NTLMv2

Currently the SMB authentication uses NTLMv1.

11.4 Create remote directories

Support creating remote directories when uploading a file to a directory
that doesn't exist on the server, just like --ftp-create-dirs.
12. New protocols

12.1 RSYNC

There's no RFC for the protocol or a URI/URL format. An implementation
should most probably use an existing rsync library, such as librsync.

13. SSL

13.1 Disable specific versions

Provide an option that allows for disabling specific SSL versions, such as
SSLv2: https://curl.haxx.se/bug/feature.cgi?id=1767276
13.2 Provide mutex locking API

Provide a libcurl API for setting mutex callbacks in the underlying SSL
library, so that the same application code can use mutex-locking
independently of whether OpenSSL or GnuTLS is used.

13.3 Evaluate SSL patches

Evaluate/apply Gertjan van Wingerde's SSL patches:
https://curl.haxx.se/mail/lib-2004-03/0087.html
13.4 Cache/share OpenSSL contexts

"Look at SSL cafile - quick traces look to me like these are done on every
request as well, when they should only be necessary once per SSL context (or
once per handle)". The major improvement we can rather easily do is to make
sure we don't create and kill a new SSL "context" for every request, but
instead make one for every connection and re-use that SSL context in the
same style connections are re-used. It will make us use slightly more memory
but it will make libcurl do fewer creations and deletions of SSL contexts.

Technically, the "caching" is probably best implemented by getting added to
the share interface, so that easy handles that want to and can reuse the
context specify that by sharing with the right properties set.

https://github.com/curl/curl/issues/1110
13.5 Export session ids

Add an interface to libcurl that enables "session IDs" to get
exported/imported. Cris Bailiff said: "OpenSSL has functions which can
serialise the current SSL state to a buffer of your choice, and
recover/reset the state from such a buffer at a later date - this is used by
mod_ssl for apache to implement an SSL session ID cache".

13.6 Provide callback for cert verification

OpenSSL supports a callback for customised verification of the peer
certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
it be? There's so much that could be done if it were!
13.7 improve configure --with-ssl

Make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
then NSS.

13.8 Support DANE

DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
keys and certs over DNS using DNSSEC, as an alternative to the CA model:
https://www.rfc-editor.org/rfc/rfc6698.txt

An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
(https://curl.haxx.se/mail/lib-2013-03/0075.html) but it was a too simple
approach. See Daniel's comments:
https://curl.haxx.se/mail/lib-2013-03/0103.html . libunbound may be the
correct library to base this development on.

Björn Stenberg wrote a separate initial take on DANE that was never
completed.
13.10 Support SSLKEYLOGFILE

When used, Firefox and Chrome dump their master TLS keys to the file name
this environment variable specifies. This allows tools such as Wireshark to
capture and decipher TLS traffic to/from those clients. libcurl could be
made to support this more widely (presumably this already works when built
with NSS). Peter Wu made an OpenSSL preload library that makes this
possible; it can be used as inspiration and guidance:
https://git.lekensteyn.nl/peter/wireshark-notes/tree/src/sslkeylog.c
13.11 Support intermediate & root pinning for PINNEDPUBLICKEY

CURLOPT_PINNEDPUBLICKEY does not consider the hashes of intermediate & root
certificates when comparing the pinned keys. Therefore it is not compatible
with "HTTP Public Key Pinning", where intermediate and root certificates can
also be pinned. This is very useful as it prevents webadmins from "locking
themselves out of their servers".

Adding this feature would make curl's pinning 100% compatible with HPKP and
allow more flexible pinning.
13.12 Support HSTS

"HTTP Strict Transport Security" is a TOFU (trust on first use), time-based
feature indicated by an HTTP header sent by the webserver. It is widely used
in browsers and its purpose is to prevent insecure HTTP connections after a
previous HTTPS connection. It protects against SSL-stripping attacks.

Doc: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
RFC 6797: https://tools.ietf.org/html/rfc6797

13.13 Support HPKP

"HTTP Public Key Pinning" is a TOFU (trust on first use), time-based feature
indicated by an HTTP header sent by the webserver. Its purpose is to prevent
man-in-the-middle attacks by trusted CAs, by allowing webadmins to specify
which CAs/certificates/public keys to trust when connecting to them.

It can be built based on PINNEDPUBLICKEY.

Wikipedia: https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
OWASP: https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning
Doc: https://developer.mozilla.org/de/docs/Web/Security/Public_Key_Pinning
RFC: https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21
14. GnuTLS

14.1 SSL engine stuff

Is this even possible?

14.2 check connection

Add a way to check if the connection seems to be alive, to correspond to the
SSL_peek() way we use with OpenSSL.
15. WinSSL/SChannel

15.1 Add support for client certificate authentication

WinSSL/SChannel currently makes use of the OS-level system and user
certificate and private key stores. This does not allow the application
or the user to supply a custom client certificate using curl or libcurl.

Therefore support for the existing -E/--cert and --key options should be
implemented by supplying a custom certificate to the SChannel APIs, see:
- Getting a Certificate for Schannel
  https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx
15.2 Add support for custom server certificate validation

WinSSL/SChannel currently makes use of the OS-level system and user
certificate trust store. This does not allow the application or user to
customize the server certificate validation process using curl or libcurl.

Therefore support for the existing --cacert or --capath options should be
implemented by supplying a custom certificate to the SChannel APIs, see:
- Getting a Certificate for Schannel
  https://msdn.microsoft.com/en-us/library/windows/desktop/aa375447.aspx

15.3 Add support for the --ciphers option

The cipher suites used by WinSSL/SChannel are configured on an OS-level
instead of an application-level. This does not allow the application or
the user to customize the configured cipher suites using curl or libcurl.

Therefore support for the existing --ciphers option should be implemented
by mapping the OpenSSL/GnuTLS cipher suites to the SChannel APIs, see
- Specifying Schannel Ciphers and Cipher Strengths
  https://msdn.microsoft.com/en-us/library/windows/desktop/aa380161.aspx
16. SASL

16.1 Other authentication mechanisms

Add support for other authentication mechanisms such as OLP,
GSS-SPNEGO and others.

16.2 Add QOP support to GSSAPI authentication

Currently the GSSAPI authentication only supports the default QOP of auth
(Authentication), whilst Kerberos V5 supports both auth-int (Authentication
with integrity protection) and auth-conf (Authentication with integrity and
confidentiality protection).

16.3 Support binary messages (i.e.: non-base64)

Mandatory to support LDAP SASL authentication.
17. SSH protocols

17.1 Multiplexing

SSH is a perfectly fine multiplexed protocol that would allow libcurl to do
multiple parallel transfers from the same host using the same connection,
much in the same spirit as HTTP/2 does. libcurl however does not take
advantage of that ability but will instead always create a new connection
for new transfers even if an existing connection to the host already exists.

To fix this, libcurl would have to detect an existing connection and "attach"
the new transfer to the existing one.
17.2 SFTP performance

libcurl's SFTP transfer performance is sub par and can be improved, mostly by
the approach mentioned in "1.6 Modified buffer size approach".
17.3 Support better than MD5 hostkey hash

libcurl offers the CURLOPT_SSH_HOST_PUBLIC_KEY_MD5 option for verifying the
server's key. MD5 is generally being deprecated so we should implement
support for stronger hashing algorithms. libssh2 itself is what provides this
underlying functionality and it supports at least SHA-1 as an alternative.
SHA-1 is also being deprecated these days so we should consider working with
libssh2 to instead offer support for SHA-256 or similar.
18. Command line tool

18.1 sync

"curl --sync http://example.com/feed[1-100].rss" or
"curl --sync http://example.net/{index,calendar,history}.html"

Downloads a range or set of URLs using the remote name, but only if the
remote file is newer than the local file. A Last-Modified HTTP date header
should also be used to set the mod date on the downloaded file.
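The core decision --sync would have to make can be sketched as comparing the
server's Last-Modified header against the local file's mtime. The helper
names below are invented, and timegm() is a common BSD/glibc extension
rather than strict POSIX:

```c
#define _XOPEN_SOURCE 700    /* strptime() */
#define _DEFAULT_SOURCE      /* timegm() */
#include <time.h>

/* Parse the RFC 7231 date format, e.g. "Tue, 15 Nov 1994 12:45:26 GMT". */
static time_t parse_http_date(const char *hdr)
{
  struct tm tm = {0};
  if(!strptime(hdr, "%a, %d %b %Y %H:%M:%S GMT", &tm))
    return (time_t)-1;
  return timegm(&tm);          /* HTTP dates are always UTC */
}

/* Download only when the remote copy is strictly newer. */
static int should_sync(const char *last_modified, time_t local_mtime)
{
  time_t remote = parse_http_date(last_modified);
  if(remote == (time_t)-1)
    return 1;                  /* unparsable date: download to be safe */
  return remote > local_mtime;
}
```

The same parsed time would then feed the "set the mod date on the downloaded
file" step, e.g. via utimes().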
18.2 glob posts

Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
This is easily scripted though.
18.3 prevent file overwriting

Add an option that prevents curl from overwriting existing local files. When
used, and there already is an existing file with the target file name
(either -O or -o), a number should be appended (and increased if already
existing). So that index.html becomes first index.html.1 and then
index.html.2, and so on.
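The proposed naming scheme is simple enough to sketch. The exists() check is
passed in as a callback so the logic stays testable; all names here are
illustrative, not curl internals:

```c
#include <stdio.h>
#include <string.h>

typedef int (*exists_fn)(const char *name);

/* Write the first name that does not yet exist into buf:
 * "index.html", then "index.html.1", "index.html.2", ... */
static char *next_free_name(const char *name, exists_fn exists,
                            char *buf, size_t buflen)
{
  int i;
  if(!exists(name)) {
    snprintf(buf, buflen, "%s", name);
    return buf;
  }
  for(i = 1;; i++) {
    snprintf(buf, buflen, "%s.%d", name, i);
    if(!exists(buf))
      return buf;
  }
}

/* Example predicate pretending two files already exist on disk. */
static int fake_exists(const char *name)
{
  return !strcmp(name, "index.html") || !strcmp(name, "index.html.1");
}
```

In the tool itself the predicate would just be an access() or stat() call on
the candidate name.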
18.4 simultaneous parallel transfers

The client could be told to use a maximum of N simultaneous parallel
transfers and then just make sure that happens. It should of course not make
more than one connection to the same remote host. This would require the
client to use the multi interface.
https://curl.haxx.se/bug/feature.cgi?id=1558595

Using the multi interface would also allow properly using parallel transfers
with HTTP/2 and supporting HTTP/2 server push from the command line.
18.5 provide formpost headers

Extending the capabilities of the multipart formposting. How about leaving
the ';type=foo' syntax as it is and adding an extra tag (headers) which
works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where
fil1.hdr contains extra headers like

Content-Type: text/plain; charset=KOI8-R
Content-Transfer-Encoding: base64
X-User-Comment: Please don't use browser specific HTML code

which should override the program's reasonable defaults (text/plain,
base64 encoding, etc).
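Parsing the proposed suffix could look roughly like this sketch, which
splits the ';headers=@file' part off a -F argument in place. This is
illustrative parsing only, not curl's actual formpost parser:

```c
#include <stdio.h>
#include <string.h>

/* Copy the file name following ";headers=@" into hdrfile and strip the
 * suffix from spec in place. Returns 1 if found, 0 otherwise. */
static int split_headers_suffix(char *spec, char *hdrfile, size_t len)
{
  char *p = strstr(spec, ";headers=@");
  if(!p)
    return 0;
  snprintf(hdrfile, len, "%s", p + strlen(";headers=@"));
  *p = '\0';                 /* cut the suffix off the original spec */
  return 1;
}
```

A real implementation would additionally have to deal with the suffix
appearing before or after an existing ';type=foo' tag.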
18.6 warning when setting an option

Display a warning when libcurl returns an error when setting an option.
This can be useful to tell when support for a particular feature hasn't been
compiled into the library.
18.7 warning when sending binary output to terminal

Provide a way that prompts the user for confirmation before binary data is
sent to the terminal, much in the style 'less' does it.
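A simple detection heuristic in the spirit of what 'less' effectively does:
treat a buffer as binary if it contains a NUL byte or a high share of
non-printable, non-whitespace bytes. The 30% threshold is an assumption for
illustration:

```c
#include <ctype.h>
#include <stddef.h>

/* Returns 1 when the buffer looks like binary data, 0 otherwise. */
static int looks_binary(const unsigned char *buf, size_t len)
{
  size_t i, weird = 0;
  for(i = 0; i < len; i++) {
    if(buf[i] == '\0')
      return 1;              /* a NUL byte is a strong binary signal */
    if(!isprint(buf[i]) && !isspace(buf[i]))
      weird++;
  }
  return len && (weird * 10 > len * 3);  /* more than 30% odd bytes */
}
```

curl would only need to run this on the first output chunk, and only when
stdout is a terminal (isatty()).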
18.8 offer color-coded HTTP header output

By offering different color output on the header name and the header
contents, they could be made more readable and thus help users working on
HTTP debugging.
18.9 Choose the name of file in braces for complex URLs

When using braces to download a list of URLs and you use complicated names
in the list of alternatives, it could be handy to allow curl to use other
names when saving.

Consider a way to offer that. Possibly like
{partURL1:name1,partURL2:name2,partURL3:name3} where the name following the
colon is the output name.

See https://github.com/curl/curl/issues/221
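The per-alternative split of that proposed syntax could be sketched like
this. A real globbing parser would iterate over the comma-separated tokens;
this only handles one "part:name" token, splitting at the last colon (an
assumption, since URL parts may themselves contain colons):

```c
#include <stdio.h>
#include <string.h>

/* Split "part:name" into part and name. Returns 1 when a name is
 * present, 0 when the token has no ':name' suffix. */
static int split_alt(const char *tok, char *part, char *name, size_t len)
{
  const char *colon = strrchr(tok, ':');
  if(!colon) {
    snprintf(part, len, "%s", tok);
    name[0] = '\0';
    return 0;
  }
  snprintf(part, len, "%.*s", (int)(colon - tok), tok);
  snprintf(name, len, "%s", colon + 1);
  return 1;
}
```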
18.10 improve how curl works in a windows console window

If you pull the scrollbar when transferring with curl in a Windows console
window, the transfer is interrupted and can get disconnected. This can
probably be improved. See https://github.com/curl/curl/issues/322
18.11 -w output to stderr

-w is quite useful, but not to those of us who use curl without -o or -O
(such as for scripting through a higher level language). It would be nice to
have an option that is exactly like -w but sends it to stderr
instead. Proposed name: --write-stderr. See
https://github.com/curl/curl/issues/613
18.12 keep running, read instructions from pipe/socket

Provide an option that makes curl not exit after the last URL (or even run
without a given URL), and instead read further instructions over a pipe or
socket, so that a subsequent curl invocation can talk to the still running
instance and ask for transfers to get done, letting it maintain its
connection pool, DNS cache and more.
18.13 support metalink in http headers

Curl has support for downloading a metalink xml file, processing it, and then
downloading the target of the metalink. This is done via the --metalink
option. It would be nice if metalink also supported downloading via metalink
information that is stored in HTTP headers (RFC 6249). Theoretically this
could also be supported with the --metalink option.

See https://tools.ietf.org/html/rfc6249

See also https://lists.gnu.org/archive/html/bug-wget/2015-06/msg00034.html for
an implementation of this in wget.
18.14 --fail without --location should treat 3xx as a failure

To allow a command line like this to detect a redirect and consider it a
failure:

curl -v --fail -O https://example.com/curl-7.48.0.tar.gz

... --fail must treat 3xx responses as failures too. The least problematic
way to implement this is probably to add that new logic in the command line
tool only and not in the underlying CURLOPT_FAILONERROR logic.
18.15 --retry should resume

When --retry is used and curl actually retries a transfer, it should use the
already transferred data and do a resumed transfer for the rest (when
possible) so that it doesn't have to transfer again the data that was
already transferred before the retry.

See https://github.com/curl/curl/issues/1084
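One way to sketch the resume step: take the size of the partially downloaded
file as the offset and request the rest, either as an HTTP Range header or,
libcurl-side, via CURLOPT_RESUME_FROM_LARGE. The helper names here are
illustrative only:

```c
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

/* How much of the file did we already get? 0 when it does not exist. */
static long long resume_offset(const char *path)
{
  struct stat st;
  if(stat(path, &st) != 0)
    return 0;
  return (long long)st.st_size;
}

/* Ask for everything from that offset onwards. */
static void make_range_header(long long from, char *buf, size_t len)
{
  snprintf(buf, len, "Range: bytes=%lld-", from);
}
```

A careful implementation would also have to check that the server honored
the range (206 rather than 200) before appending to the partial file.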
18.16 send only part of --data

When the user only wants to send a small piece of the data provided with
--data or --data-binary, like when that data is a huge file, consider a way
to specify that curl should only send a piece of that. One suggested syntax
would be: "--data-binary @largefile.zip!1073741823-2147483647".

See https://github.com/curl/curl/issues/1200
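Parsing that suggested "@file!start-end" syntax could be sketched as follows;
this is purely illustrative and not an agreed-on design:

```c
#include <stdio.h>
#include <string.h>

/* Split "@largefile.zip!1073741823-2147483647" into the file name and
 * the inclusive byte range. Returns 1 on success, 0 otherwise. */
static int parse_data_slice(const char *spec, char *file, size_t len,
                            long long *start, long long *end)
{
  const char *bang = strrchr(spec, '!');
  if(spec[0] != '@' || !bang)
    return 0;
  if(sscanf(bang + 1, "%lld-%lld", start, end) != 2)
    return 0;
  snprintf(file, len, "%.*s", (int)(bang - spec - 1), spec + 1);
  return 1;
}
```

Splitting at the last '!' is a deliberate choice so that file names
containing '!' still parse.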
18.17 consider file name from the redirected URL with -O ?

When a user gives a URL and uses -O, and curl follows a redirect to a new
URL, the file name is not extracted and used from the newly redirected-to URL
even if the new URL may have a much more sensible file name.

This is clearly documented and helps for security since there's no surprise
to users about which file name might get overwritten. But maybe a new option
could allow for this, or maybe -J should imply such a treatment as well, as
-J already allows the server to decide what file name to use so it already
carries the "may overwrite any file" risk.

This is extra tricky if the original URL has no file name part at all, since
then the current code path will error out with an error message, and we can't
*know* already at that point if curl will be redirected to a URL that has a
usable file name.

See https://github.com/curl/curl/issues/1241
19. Build

19.1 roffit

Consider extending 'roffit' to produce decent ASCII output, and use that
instead of (g)nroff when building src/tool_hugehelp.c
19.2 Enable PIE and RELRO by default

Especially when having programs that execute curl via the command line, PIE
renders the exploitation of memory corruption vulnerabilities a lot more
difficult. This can be attributed to the additional information leaks being
required to conduct a successful attack. RELRO, on the other hand, marks
different binary sections like the GOT as read-only and thus kills a handful
of techniques that come in handy when attackers are able to arbitrarily
overwrite memory. A few tests showed that enabling these features had close
to no impact, neither on the performance nor on the general functionality of
curl.
20. Test suite

20.1 SSL tunnel

Make our own version of stunnel for simple port forwarding to enable HTTPS
and FTP-SSL tests without the stunnel dependency. It could also allow us to
provide test tools built with either OpenSSL or GnuTLS.
20.2 nicer lacking perl message

If perl wasn't found by the configure script, don't attempt to run the tests
but explain nicely why they cannot be run.
20.3 more protocols supported

Extend the test suite to include more protocols. The telnet tests could just
do FTP or HTTP operations (for which we have test servers).
20.4 more platforms supported

Make the test suite work on more platforms, such as OpenBSD and Mac OS.
Removing the fork()s should make it even more portable.
20.5 Add support for concurrent connections

Tests 836, 882 and 938 were designed to verify that separate connections
aren't used when using different login credentials in protocols that
shouldn't re-use a connection under such circumstances.

Unfortunately, ftpserver.pl doesn't appear to support multiple concurrent
connections. The read while() loop seems to loop until it receives a
disconnect from the client, whereupon it enters the waiting-for-connections
loop. When the client opens a second connection to the server, the first
connection hasn't been dropped (unless it has been forced - which we
shouldn't do in these tests) and thus the wait-for-connections loop is never
entered to receive the second connection.
20.6 Use the RFC6265 test suite

A test suite made for HTTP cookies (RFC 6265) by Adam Barth is available at
https://github.com/abarth/http-state/tree/master/tests

It'd be really awesome if someone would write a script/setup that would run
curl with that test suite and detect deviations. Ideally, that would even be
incorporated into our regular test suite.
21. Next SONAME bump

21.1 http-style HEAD output for FTP

#undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to remove the HTTP-style headers
from being output in NOBODY requests over FTP
21.2 combine error codes

Combine some of the error codes to remove duplicates. The original
numbering should not be changed, and the old identifiers would be
macroed to the new ones in a CURL_NO_OLDIES section to help with
backward compatibility.

Candidates for removal and their replacements:

CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND

CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND

CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR

CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT

CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT

CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL

CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND

CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
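The macroing could stay source-compatible along these lines. Note the enum
below is a toy stand-in with made-up subset and values for illustration; the
real codes live in curl/curl.h:

```c
/* Toy stand-in for part of CURLcode. */
enum toy_CURLcode {
  CURLE_OK = 0,
  CURLE_REMOTE_ACCESS_DENIED = 9,
  CURLE_RANGE_ERROR = 33,
  CURLE_REMOTE_FILE_NOT_FOUND = 78
};

/* Retired identifiers keep working unless the app opts out. */
#ifndef CURL_NO_OLDIES
#define CURLE_FTP_COULDNT_RETR_FILE CURLE_REMOTE_FILE_NOT_FOUND
#define CURLE_FTP_COULDNT_USE_REST  CURLE_RANGE_ERROR
#define CURLE_TFTP_PERM             CURLE_REMOTE_ACCESS_DENIED
#endif
```

Existing applications compiled against the new header would then compare
against the same numeric values as before.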
21.3 extend CURLOPT_SOCKOPTFUNCTION prototype

The current prototype only provides 'purpose' that tells what the
connection/socket is for, but not any protocol or similar. It makes it hard
for applications to differentiate on TCP vs UDP and even HTTP vs FTP and
similar.
22. Next major release

22.1 cleanup return codes

curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
CURLMcode. These should be changed to be the same.
22.2 remove obsolete defines

remove obsolete defines from curl/curl.h

22.3 size_t

make several functions use size_t instead of int in their APIs
22.4 remove several functions

remove the following functions from the public API:

curl_mprintf (and variations)

They will instead become curlx_ alternatives. That makes the curl app
still capable of using them, by building with them from source.

These functions have no purpose anymore:

curl_multi_socket_all
22.5 remove CURLOPT_FAILONERROR

Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
internally. Let the app judge success or not for itself.
22.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE

Remove support for a global DNS cache. Anything global is silly, and we
already offer the share interface for the same functionality but done
right.
22.7 remove progress meter from libcurl

The internally provided progress meter output doesn't belong in the library.
Basically no application wants it (apart from curl) but instead applications
can and should do their own progress meters using the progress callback.

The progress callback should then be bumped as well to get proper 64 bit
variable types passed to it instead of doubles so that big files work
correctly.
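Such a bumped callback could look much like the curl_off_t-based
CURLOPT_XFERINFOFUNCTION already does for transfer info. The typedef and
names below are illustrative, not a proposed ABI:

```c
#include <stdint.h>

/* A possible 64-bit progress callback type. */
typedef int (*xferinfo_cb)(void *userdata,
                           int64_t dltotal, int64_t dlnow,
                           int64_t ultotal, int64_t ulnow);

/* Example callback storing download progress in whole percent;
 * integer math stays exact where doubles would lose precision. */
static int report(void *userdata, int64_t dltotal, int64_t dlnow,
                  int64_t ultotal, int64_t ulnow)
{
  int *pct = (int *)userdata;
  (void)ultotal;
  (void)ulnow;
  *pct = dltotal ? (int)(dlnow * 100 / dltotal) : 0;
  return 0;                  /* non-zero would abort the transfer */
}
```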
22.8 remove 'curl_httppost' from public

curl_formadd() was made to fill in a public struct, but the fact that the
struct is public is never really used by applications for their own advantage
but instead often restricts how the form functions can or can't be modified.

Changing them to return a private handle will benefit the implementation and
allow us much greater freedoms while still maintaining a solid API and ABI.