                                  _   _ ____  _
                              ___| | | |  _ \| |
                             / __| | | | |_) | |
                            | (__| |_| |  _ <| |___
                             \___|\___/|_| \_\_____|

                                   TODO

                 Things that could be nice to do in the future
Things to do in project cURL. Please tell us what you think, contribute and
send us patches that improve things!

All bugs documented in the KNOWN_BUGS document are subject for fixing!
 1.4 signal-based resolver timeouts
 1.5 get rid of PATH_MAX
 1.6 Happy Eyeballs dual stack connect
 1.7 Modified buffer size approach

 2. libcurl - multi interface
 2.2 Fix HTTP Pipelining for PUT

 4.2 Alter passive/active on failure and retry
 4.3 Earlier bad letter detection
 4.4 REST for large files

 5.1 Better persistency for HTTP 1.0
 5.2 support FF3 sqlite cookie files
 5.3 Rearrange request header order

 6.2 ditch telnet-specific select
 6.3 feature negotiation debug data
 6.4 send data in chunks

 7.2 Graceful base64 decoding failure
 7.3 Enhanced capability support

 8.2 Graceful base64 decoding failure
 8.3 Enhanced capability support

 9.1 Graceful base64 decoding failure
 9.2 Enhanced capability support

 10.1 SASL based authentication mechanisms

 12.1 Disable specific versions
 12.2 Provide mutex locking API
 12.3 Evaluate SSL patches
 12.4 Cache OpenSSL contexts
 12.5 Export session ids
 12.6 Provide callback for cert verification
 12.7 Support other SSL libraries
 12.8 improve configure --with-ssl

 14.1 Other authentication mechanisms

 15.3 prevent file overwriting
 15.4 simultaneous parallel transfers
 15.5 provide formpost headers
 15.6 url-specific options
 15.7 warning when setting an option
 15.8 IPv6 addresses with globbing

 17.2 nicer lacking perl message
 17.3 more protocols supported
 17.4 more platforms supported

 18.1 http-style HEAD output for ftp
 18.2 combine error codes
 18.3 extend CURLOPT_SOCKOPTFUNCTION prototype

 19. Next major release
 19.1 cleanup return codes
 19.2 remove obsolete defines
 19.4 remove several functions
 19.5 remove CURLOPT_FAILONERROR
 19.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE
 19.7 remove progress meter from libcurl
 19.8 remove 'curl_httppost' from public
 19.9 have form functions use CURL handle argument
 19.10 Add CURLOPT_MAIL_CLIENT option

==============================================================================
1.2 More data sharing

curl_share_* functions already exist and work, and they can be extended to
share more. For example, enable sharing of the ares channel and the
connection cache.
1.3 struct lifreq

Use 'struct lifreq' and SIOCGLIFADDR instead of 'struct ifreq' and
SIOCGIFADDR on newer Solaris versions, as they claim the latter is obsolete,
in order to properly support IPv6 addresses for network interfaces.
1.4 signal-based resolver timeouts

libcurl built without an asynchronous resolver library uses alarm() to time
out DNS lookups. When a timeout occurs, this causes libcurl to jump from the
signal handler back into the library with a sigsetjmp, which effectively
causes libcurl to continue running within the signal handler. This is
non-portable and could cause problems on some platforms. A discussion on the
problem is available at http://curl.haxx.se/mail/lib-2008-09/0197.html

Also, alarm() provides timeout resolution only to the nearest second. alarm
ought to be replaced by setitimer on systems that support it.
1.5 get rid of PATH_MAX

Having code use and rely on PATH_MAX is not nice:
http://insanecoding.blogspot.com/2007/11/pathmax-simply-isnt.html

Currently the SSH based code uses it a bit, but to remove PATH_MAX from there
we need libssh2 to properly tell us when we pass in a too small buffer and
its current API (as of libssh2 1.2.7) doesn't.
1.6 Happy Eyeballs dual stack connect

In order to make alternative technologies not suffer during a transition,
such as when IPv6 is introduced as an alternative to IPv4 and more than one
option exists simultaneously, there are reasons to reconsider how libcurl
connects internally.

To make libcurl do blazing fast IPv6 in a dual-stack configuration, this
needs to be addressed:

http://tools.ietf.org/html/rfc6555
1.7 Modified buffer size approach

Currently libcurl allocates a fixed 16K buffer for download and an
additional 16K for upload. They are always unconditionally part of the easy
handle. If CRLF translations are requested, an additional 32K "scratch
buffer" is allocated: a total of 64K of transfer buffers in the worst case.

First, while the handles are not actually in use these buffers could be freed
so that lingering handles just kept in queues or whatever waste less memory.

Secondly, SFTP is a protocol that needs to handle many ~30K blocks at once
since each needs to be individually acked, and libssh2 must therefore be
allowed to send (or receive) many separate ones in parallel to achieve high
transfer speeds. A current libcurl build with a 16K buffer makes that
impossible, but one with a 512K buffer will reach MUCH faster transfers. Yet
allocating 512K unconditionally for all buffers, just in case fast SFTP
transfers might happen at some point, is not a good solution either.

Dynamically allocate the buffer size depending on the protocol in use, in
combination with freeing it after each individual transfer? Other
suggestions?
2. libcurl - multi interface

2.1 More non-blocking

Make sure we don't ever loop because of non-blocking sockets returning
EWOULDBLOCK or similar. Blocking cases include:

- Name resolves on non-Windows unless c-ares is used
- NSS SSL connections
- HTTP proxy CONNECT operations
- SOCKS proxy handshakes
- The "DONE" operation (post transfer protocol-specific actions) for the
  protocols SFTP, SMTP, FTP. Fixing Curl_done() for this is a worthy task.

2.2 Fix HTTP Pipelining for PUT

HTTP Pipelining can be a way to greatly enhance performance for multiple
serial requests, and currently libcurl only supports it for HEAD and GET
requests, but it should also be possible for PUT.
4.1 HOST

HOST is a suggested command in the works for a client to tell which host name
to use, to offer FTP servers name-based virtual hosting:

http://tools.ietf.org/html/draft-hethmon-mcmurray-ftp-hosts-11
4.2 Alter passive/active on failure and retry

When trying to connect passively to a server which only supports active
connections, libcurl returns CURLE_FTP_WEIRD_PASV_REPLY and closes the
connection. There could be a way to fall back to an active connection (and
vice versa). http://curl.haxx.se/bug/feature.cgi?id=1754793
4.3 Earlier bad letter detection

Make the detection of (bad) %0d and %0a codes in FTP URL parts happen earlier
in the process, to avoid doing a resolve and connect in vain.
4.4 REST for large files

REST fix for servers not behaving well on >2GB requests. This should fail if
the server doesn't set the pointer to the requested index. The tricky
(impossible?) part is to figure out if the server did the right thing or not.
4.5 FTP proxy support

Support the most common FTP proxies; Philip Newton provided a list allegedly
from ncftp. This is not a subject without debate, and it is probably not
really suitable for libcurl. http://curl.haxx.se/mail/archive-2003-04/0126.html
4.6 ASCII support

FTP ASCII transfers do not follow RFC959. They don't convert the data
accordingly.
5.1 Better persistency for HTTP 1.0

"Better" support for persistent connections over HTTP 1.0
http://curl.haxx.se/bug/feature.cgi?id=1089001

5.2 support FF3 sqlite cookie files

Firefox 3 changed from its former cookie file format to a sqlite database.
We should consider how (lib)curl can/should support this.
http://curl.haxx.se/bug/feature.cgi?id=1871388
5.3 Rearrange request header order

Server implementors often make an effort to detect browsers and to reject
clients they detect as not matching. One of the last details we cannot yet
control in libcurl's HTTP requests, and which can be exploited to detect
that libcurl is in fact used even when it tries to impersonate a browser, is
the order of the request headers. I propose that we introduce a new option
with which you give each header a value, and when the HTTP request is built
the headers are sorted by that number. Internally created headers could use
a default value, so that only headers that need to be moved have to be
assigned one.
The first drafts for HTTP2 have been published
(http://tools.ietf.org/html/draft-ietf-httpbis-http2-03) and are so far based
on SPDY (http://www.chromium.org/spdy) designs and experiences. Chances are
it will end up in that style. Chrome and Firefox already support SPDY and
lots of web services do.

It would make sense to implement SPDY support now and later transition into
or add HTTP2 support as well.

We should base our HTTP2/SPDY work on a 3rd party library for the protocol
fiddling. The spindly library (http://spindly.haxx.se/) was an attempt to make
such a library with an API suitable for use by libcurl, but that effort has
more or less stalled. spdylay (https://github.com/tatsuhiro-t/spdylay) may
be a better option, either used directly or wrapped with a more spindly-like
API.
6.1 ditch stdin

Reading input (to send to the remote server) on stdin is a crappy solution
for library purposes. We need to invent a good way for the application to be
able to provide the data to send.
6.2 ditch telnet-specific select

Make the telnet support's network select() loop go away and merge the code
into the main transfer loop. Until this is done, the multi interface won't
work for telnet.

6.3 feature negotiation debug data

Add telnet feature negotiation data to the debug callback as header data.

6.4 send data in chunks

Currently, telnet sends data one byte at a time. This is fine for interactive
use, but inefficient for everything else. Sent data should be sent in larger
chunks.
7.1 Pipelining

Add support for pipelining emails.

7.2 Graceful base64 decoding failure

Rather than shutting down the session and returning an error when the
decoding of a base64 encoded authentication response fails, we should
gracefully shut down the authentication process by sending a * response to
the server as per RFC4954.
7.3 Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the EHLO command.
8.1 Pipelining

Add support for pipelining commands.

8.2 Graceful base64 decoding failure

Rather than shutting down the session and returning an error when the
decoding of a base64 encoded authentication response fails, we should
gracefully shut down the authentication process by sending a * response to
the server as per RFC5034.

8.3 Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the CAPA command.
9.1 Graceful base64 decoding failure

Rather than shutting down the session and returning an error when the
decoding of a base64 encoded authentication response fails, we should
gracefully shut down the authentication process by sending a * response to
the server as per RFC3501.

9.2 Enhanced capability support

Add the ability, for an application that uses libcurl, to obtain the list of
capabilities returned from the CAPABILITY command.
10.1 SASL based authentication mechanisms

Currently the LDAP module only supports ldap_simple_bind_s() in order to bind
to an LDAP server, but this function sends the username and password details
using the simple authentication mechanism (as clear text). It should instead
be possible to use ldap_bind_s(), specifying the security context
information ourselves.
There's no RFC for the protocol nor a URI/URL format. An implementation
should most probably use an existing rsync library, such as librsync.
12.1 Disable specific versions

Provide an option that allows for disabling specific SSL versions, such as
SSLv2: http://curl.haxx.se/bug/feature.cgi?id=1767276
12.2 Provide mutex locking API

Provide a libcurl API for setting mutex callbacks in the underlying SSL
library, so that the same application code can use mutex-locking
independently of whether OpenSSL or GnuTLS is being used.
12.3 Evaluate SSL patches

Evaluate/apply Gertjan van Wingerde's SSL patches:
http://curl.haxx.se/mail/lib-2004-03/0087.html
12.4 Cache OpenSSL contexts

"Look at SSL cafile - quick traces look to me like these are done on every
request as well, when they should only be necessary once per ssl context (or
once per handle)". The major improvement we can rather easily make is to
ensure we don't create and kill a new SSL "context" for every request, but
instead make one for every connection and re-use that SSL context in the
same style connections are re-used. It will make us use slightly more memory
but it will let libcurl do fewer creations and deletions of SSL contexts.
12.5 Export session ids

Add an interface to libcurl that enables "session IDs" to get
exported/imported. Cris Bailiff said: "OpenSSL has functions which can
serialise the current SSL state to a buffer of your choice, and recover/reset
the state from such a buffer at a later date - this is used by mod_ssl for
apache to implement an SSL session ID cache".
12.6 Provide callback for cert verification

OpenSSL supports a callback for customised verification of the peer
certificate, but this doesn't seem to be exposed in the libcurl APIs. Could
it be? There's so much that could be done if it were!
12.7 Support other SSL libraries

Make curl's SSL layer capable of using other free SSL libraries, such as
MatrixSSL (http://www.matrixssl.org/).

12.8 improve configure --with-ssl

Make the configure --with-ssl option first check for OpenSSL, then GnuTLS,
then NSS.
DNS-Based Authentication of Named Entities (DANE) is a way to provide SSL
keys and certs over DNS using DNSSEC as an alternative to the CA model.
http://www.rfc-editor.org/rfc/rfc6698.txt

An initial patch was posted by Suresh Krishnaswamy on March 7th 2013
(http://curl.haxx.se/mail/lib-2013-03/0075.html) but it took too simple an
approach. See Daniel's comments:
http://curl.haxx.se/mail/lib-2013-03/0103.html . libunbound may be the
correct library to base this development on.
13.1 SSL engine stuff

Is this even possible?

13.2 check connection

Add a way to check if the connection seems to be alive, to correspond to the
SSL_peek() way we use with OpenSSL.
14.1 Other authentication mechanisms

Add support for GSSAPI to SMTP, POP3 and IMAP.
"curl --sync http://example.com/feed[1-100].rss" or
"curl --sync http://example.net/{index,calendar,history}.html"

Downloads a range or set of URLs using the remote name, but only if the
remote file is newer than the local file. A Last-Modified HTTP date header
should also be used to set the mod date on the downloaded file.
Globbing support for -d and -F, as in 'curl -d "name=foo[0-9]" URL'.
This is easily scripted though.
15.3 prevent file overwriting

Add an option that prevents cURL from overwriting existing local files. When
used, and there already is an existing file with the target file name
(either -O or -o), a number should be appended (and increased if already
existing), so that index.html first becomes index.html.1, then index.html.2,
and so on.
15.4 simultaneous parallel transfers

The client could be told to use a maximum of N simultaneous parallel
transfers and then just make sure that happens. It should of course not make
more than one connection to the same remote host. This would require the
client to use the multi interface.
http://curl.haxx.se/bug/feature.cgi?id=1558595
15.5 provide formpost headers

Extend the capabilities of multipart formposting. How about leaving
the ';type=foo' syntax as it is and adding an extra tag (headers) which
works like this: curl -F "coolfiles=@fil1.txt;headers=@fil1.hdr" where
fil1.hdr contains extra headers like

Content-Type: text/plain; charset=KOI8-R
Content-Transfer-Encoding: base64
X-User-Comment: Please don't use browser specific HTML code

which should override the program's reasonable defaults (text/plain, 8bit).
15.6 url-specific options

Provide a way to make options bound to a specific URL among several on the
command line, possibly by letting ':' separate options between URLs,
similar to this:

curl --data foo --url url.com : \
    --url url2.com : \
    --url url3.com --data foo3

(More details: http://curl.haxx.se/mail/archive-2004-07/0133.html)

The example would do a POST-GET-POST combination on a single command line.
15.7 warning when setting an option

Display a warning when libcurl returns an error while setting an option.
This can be useful to tell when support for a particular feature hasn't been
compiled into the library.
15.8 IPv6 addresses with globbing

Currently the command line client needs URL globbing disabled (with -g) to
support numerical IPv6 addresses. This is a rather silly flaw that should be
corrected. It probably involves a smarter detection of the '[' and ']'
letters.
Consider extending 'roffit' to produce decent ASCII output, and use that
instead of (g)nroff when building src/tool_hugehelp.c
Make our own version of stunnel for simple port forwarding to enable HTTPS
and FTP-SSL tests without the stunnel dependency, and it could allow us to
provide test tools built with either OpenSSL or GnuTLS.

17.2 nicer lacking perl message

If perl wasn't found by the configure script, don't attempt to run the tests
but explain nicely why they cannot be run.
17.3 more protocols supported

Extend the test suite to include more protocols. The telnet tests could just
do FTP or HTTP operations (for which we have test servers).

17.4 more platforms supported

Make the test suite work on more platforms, such as OpenBSD and Mac OS.
Remove fork()s and it should become even more portable.
18.1 http-style HEAD output for ftp

#undef CURL_FTP_HTTPSTYLE_HEAD in lib/ftp.c to prevent the HTTP-style
headers from being output in NOBODY requests over FTP.
18.2 combine error codes

Combine some of the error codes to remove duplicates. The original
numbering should not be changed, and the old identifiers would be
macroed to the new ones in a CURL_NO_OLDIES section to help with
backward compatibility.

Candidates for removal and their replacements:

CURLE_FILE_COULDNT_READ_FILE => CURLE_REMOTE_FILE_NOT_FOUND

CURLE_FTP_COULDNT_RETR_FILE => CURLE_REMOTE_FILE_NOT_FOUND

CURLE_FTP_COULDNT_USE_REST => CURLE_RANGE_ERROR

CURLE_FUNCTION_NOT_FOUND => CURLE_FAILED_INIT

CURLE_LDAP_INVALID_URL => CURLE_URL_MALFORMAT

CURLE_TFTP_NOSUCHUSER => CURLE_TFTP_ILLEGAL

CURLE_TFTP_NOTFOUND => CURLE_REMOTE_FILE_NOT_FOUND

CURLE_TFTP_PERM => CURLE_REMOTE_ACCESS_DENIED
18.3 extend CURLOPT_SOCKOPTFUNCTION prototype

The current prototype only provides 'purpose' that tells what the
connection/socket is for, but not any protocol or similar. It makes it hard
for applications to differentiate on TCP vs UDP and even HTTP vs FTP and
similar.
19. Next major release

19.1 cleanup return codes

curl_easy_cleanup() returns void, but curl_multi_cleanup() returns a
CURLMcode. These should be changed to be the same.

19.2 remove obsolete defines

Remove obsolete defines from curl/curl.h
19.3 size_t

Make several functions use size_t instead of int in their APIs.
19.4 remove several functions

Remove the following functions from the public API:

curl_mprintf (and variations)

They will instead become curlx_ alternatives. That makes the curl app
still capable of using them, by building with them from source.

These functions have no purpose anymore:

curl_multi_socket_all
19.5 remove CURLOPT_FAILONERROR

Remove support for CURLOPT_FAILONERROR, it has gotten too kludgy and weird
internally. Let the app judge success or not for itself.

19.6 remove CURLOPT_DNS_USE_GLOBAL_CACHE

Remove support for a global DNS cache. Anything global is silly, and we
already offer the share interface for the same functionality, done right.
19.7 remove progress meter from libcurl

The internally provided progress meter output doesn't belong in the library.
Basically no application wants it (apart from curl), but applications
can and should do their own progress meters using the progress callback.

The progress callback should then be bumped as well to get proper 64-bit
variable types passed to it instead of doubles, so that big files work
correctly.
19.8 remove 'curl_httppost' from public

curl_formadd() was made to fill in a public struct, but the fact that the
struct is public is never really used by applications for their own
advantage; instead it often restricts how the form functions can or can't be
modified.

Changing them to return a private handle will benefit the implementation and
allow us much greater freedom while still maintaining a solid API and ABI.
19.9 have form functions use CURL handle argument

curl_formadd() and curl_formget() both currently have no CURL handle
argument, but both can use a callback that is set in the easy handle, and
thus curl_formget() with a callback cannot function without first having
curl_easy_perform() (or similar) called - which is hard to grasp and a
design mistake.
19.10 Add CURLOPT_MAIL_CLIENT option

Rather than use the URL to specify the mail client string to present in the
HELO and EHLO commands, libcurl should support a new CURLOPT specifically
for specifying this data, as the URL is non-standard and, to be honest, a
bit of a hack.

Please see the following thread for more information:
http://curl.haxx.se/mail/lib-2012-05/0178.html