--- /dev/null
+ _ _ ____ _
+ ___| | | | _ \| |
+ / __| | | | |_) | |
+ | (__| |_| | _ <| |___
+ \___|\___/|_| \_\_____|
+
+BUGS
+
+ Curl has grown substantially from that day, several years ago, when I
+ started fiddling with it. When I write this, there are 16500 lines of source
+ code, and by the time you read this it has probably grown even more.
+
+ Of course there are lots of bugs left. And lots of misfeatures.
+
+ To help us make curl the stable and solid product we want it to be, we need
+ bug reports and bug fixes. If you can't fix a bug yourself and submit a fix
+ for it, try to write as detailed a report as possible to the curl mailing
+ list so that one of us can have a go at a solution. You should also post
+ your bug/problem at curl's bug tracking system over at
+
+ http://sourceforge.net/bugs/?group_id=976
+
+ When reporting a bug, you should include information that will help us
+ understand what's wrong, what's expected and how to repeat it. You therefore
+ need to supply your operating system's name and version number (uname -a
+ under a unix system is fine), what version of curl you're using (curl -V
+ shows it), what URL you were working with and anything else you think
+ matters.
+
+ If curl crashed and caused a core dump (on unix), there is hardly any point
+ in sending that huge file to any of us. Unless we have the exact same system
+ setup as you, we can't do much with it. What we ask of you instead is to get
+ a stack trace and send that (much smaller) output to us!
+
+ The address and how to subscribe to the mailing list is detailed in the
+ README.curl file.
+
+ HOW TO GET A STACK TRACE with a common unix debugger
+ ====================================================
+
+ First, you must make sure that you compile all sources with -g and that you
+ don't 'strip' the final executable.
+
+ Run the program until it crashes.
+
+ Run your debugger on the core file, like '<debugger> curl core'. <debugger>
+ should be replaced with the name of your debugger, in most cases that will
+ be 'gdb', but 'dbx' and others also occur.
+
+ When the debugger has finished loading the core file and presents you with
+ a prompt, you can give the debugger instructions. Enter 'where' (without the
+ quotes) and press return.
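+
+ For example, a typical gdb session might look like this (assuming the core
+ file is simply named 'core' and sits next to the curl binary):
+
+    gdb curl core
+    (gdb) where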
+
+ The list that is presented is the stack trace. If everything worked, it is
+ supposed to contain the chain of functions that were called when curl
+ crashed.
+
--- /dev/null
+ _ _ ____ _
+ ___| | | | _ \| |
+ / __| | | | |_) | |
+ | (__| |_| | _ <| |___
+ \___|\___/|_| \_\_____|
+
+CONTRIBUTE
+
+To Think About When Contributing Source Code
+
+ This document is intended to offer some guidelines that can be useful to
+ keep in mind when you decide to write a contribution to the project. This
+ concerns new features as well as corrections to existing flaws or bugs.
+
+Naming
+
+ Try to use a non-confusing naming scheme for your new functions and variable
+ names. That doesn't necessarily mean you have to use the same scheme as in
+ other places of the code, just that the names should be logical,
+ understandable and chosen according to what they're used for.
+
+Indenting
+
+ Please try using the same indenting levels and bracing method as all the
+ other code already does. It makes the source code a lot easier to follow if
+ all of it is written using the same style. I don't ask you to like it, I
+ just ask you to follow the tradition! ;-)
+
+Commenting
+
+ Comment your source code extensively. I don't see myself as a very good
+ source commenter, but I try to become one. Commented code is quality code
+ and makes future modifications much easier. Uncommented code runs a much
+ higher risk of being completely replaced when someone wants to extend
+ things, since other people's source code can get quite hard to read.
+
+General Style
+
+ Keep your functions small. If they're small you avoid a lot of mistakes and
+ you don't accidentally mix up variables.
+
+Non-clobbering All Over
+
+ When you write new functionality or fix bugs, it is important that you
+ don't fiddle all over the source files and functions. Remember that it is
+ likely that other people have done changes in the same source files as you
+ have and possibly even in the same functions. If you bring completely new
+ functionality, try writing it in a new source file. If you fix bugs, try to
+ fix one bug at a time and send them as separate patches.
+
+Separate Patches Doing Different Things
+
+ It is annoying when you get a huge patch from someone that is said to fix 511
+ odd problems, but discussions and opinions don't agree with 510 of them - or
+ 509 of them were already fixed in a different way. Then the patcher needs to
+ extract the single interesting patch from somewhere within the huge pile of
+ source, and that gives a lot of extra work. Preferably, all fixes that
+ correct different problems should be in their own patch with an attached
+ description of exactly what they correct so that all patches can be
+ selectively applied by the maintainer or other interested parties.
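+
+ As a sketch, such a single-purpose patch can be produced with a plain
+ unified diff (the file names here are only examples):
+
+    diff -u lib/url.c.orig lib/url.c > fix-one-bug.patch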
+
+Document
+
+ Writing docs is dead boring and one of the big problems with many open
+ source projects. Someone's gotta do it. It makes it a lot easier if you
+ submit a small description of your fix or your new features with every
+ contribution so that it can be swiftly added to the package documentation.
+
+Write Access to CVS Repository
+
+ If you are a frequent contributor, or have another good reason, you can of
+ course get write access to the CVS repository and then you'll be able to
+ check-in all your changes straight into the CVS tree instead of sending all
+ changes by mail as patches. Just ask if this is what you'd want.
--- /dev/null
+ _ _ ____ _
+ ___| | | | _ \| |
+ / __| | | | |_) | |
+ | (__| |_| | _ <| |___
+ \___|\___/|_| \_\_____|
+
+FAQ
+
+Problems connecting to SSL servers.
+===================================
+
+ It took a very long time before I could sort out why curl had problems
+ connecting to certain SSL servers when using SSLeay or OpenSSL v0.9+.
+ The error sometimes showed up similar to:
+
+ 16570:error:1407D071:SSL routines:SSL2_READ:bad mac decode:s2_pkt.c:233:
+
+ It turned out to be because many older SSL servers don't deal with SSLv3
+ requests properly. To correct this problem, tell curl to select SSLv2 from
+ the command line (-2/--sslv2).
+
+ I have also seen examples where the remote server didn't like the SSLv2
+ request and instead you had to force curl to use SSLv3 with -3/--sslv3.
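+
+ For example, to force SSLv2 against such a server (the host name here is
+ made up):
+
+    curl -2 https://old.sslserver.com/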
+
+Does curl support resume?
+=========================
+
+ Yes. Both ways on FTP, download only on HTTP.
+
+Is libcurl thread safe?
+=======================
+
+ Yes, as far as curl's own code goes. It does use system calls that
+ aren't thread safe in most environments, such as gethostbyname().
+
+ I am very interested in once and for all getting some kind of report or
+ README file from those who have used libcurl in a threaded environment,
+ since I haven't and I get this question more and more frequently!
+
+Why doesn't my posting using -F work?
+=====================================
+
+ You can't simply use -F or -d at your choice. The web server that will
+ receive your post expects one of the formats. If the form you're trying to
+ "fake" sets the type to 'multipart/form-data', then and only then must you
+ use the -F type. In most common cases, you should use -d, which then
+ causes a posting with the type 'application/x-www-form-urlencoded'.
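+
+ As a sketch (the URL and field names are made up), the two styles look like:
+
+    curl -d "name=daniel&tool=curl" http://www.example.com/form.cgi
+    curl -F "name=daniel" -F "picture=@photo.gif" http://www.example.com/upload.cgi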
+
+Does curl support custom FTP commands?
+======================================
+
+ Yes it does. You can tell curl to perform optional commands both before
+ and/or after a file transfer. Study the -Q/--quote option.
+
+ Since curl is used for file transfers, you don't use curl to just perform
+ ftp commands without transferring anything. Therefore you must always specify
+ a URL to transfer to/from even when doing custom FTP commands.
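+
+ For example, a sketch that deletes a remote file before downloading another
+ one (host, user and file names are made up):
+
+    curl -Q "DELE oldfile.txt" ftp://user:passwd@ftp.example.com/newfile.txt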
+
+Does curl work with other SSL libraries?
+========================================
+
+ Curl has been written to use OpenSSL, although I doubt there would be many
+ problems using a different library. I just don't know of any other free one,
+ and that has limited my possibilities to develop against anything else.
+
+ If anyone does "port" curl to use a commercial SSL library, I am of course
+ very interested in getting the patch!
+
+configure doesn't find OpenSSL even when it is installed
+========================================================
+
+ Platforms: Solaris (native cc compiler) and HPUX (native cc compiler)
+
+ When configuring curl, I specify --with-ssl. OpenSSL is installed in
+ /usr/local/ssl. Configure reports SSL in /usr/local/ssl, but fails to find
+ CRYPTO_lock in -lcrypto.
+
+ Cause: The cc for this test places the -L/usr/local/ssl/lib AFTER -lcrypto,
+ so ld can't find the library. This is due to a bug in the GNU autoconf tool.
+
+ Workaround: Specifying "LDFLAGS=-L/usr/local/ssl/lib" in front of ./configure
+ places the -L/usr/local/ssl/lib early enough in the command line to make
+ things work.
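+
+ With a Bourne-style shell, the workaround looks like:
+
+    LDFLAGS=-L/usr/local/ssl/lib ./configure --with-ssl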
+
+ Submitted by: Bob Allison <allisonb@users.sourceforge.net>
--- /dev/null
+ _ _ ____ _
+ ___| | | | _ \| |
+ / __| | | | |_) | |
+ | (__| |_| | _ <| |___
+ \___|\___/|_| \_\_____|
+
+FEATURES
+
+Misc
+ - full URL syntax
+ - custom maximum download time
+ - custom least download speed acceptable
+ - custom output result after completion
+ - multiple URLs
+ - guesses protocol from host name unless specified
+ - uses .netrc
+ - progress bar/time specs while downloading
+ - PROXY environment variables support
+ - config file support
+ - compiles on win32
+
+HTTP
+ - GET
+ - PUT
+ - HEAD
+ - POST
+ - multipart POST
+ - authentication
+ - resume
+ - follow redirects
+ - custom HTTP request
+ - cookie get/send
+ - understands the netscape cookie file
+ - custom headers (that can replace internally generated headers)
+ - custom user-agent string
+ - custom referer string
+ - range
+ - proxy authentication
+ - time conditions
+ - via http-proxy
+
+HTTPS (*1)
+ - (all the HTTP features)
+ - using certificates
+ - via http-proxy
+
+FTP
+ - download
+ - authentication
+ - PORT or PASV
+ - single file size information (compare to HTTP HEAD)
+ - 'type=' URL support
+ - dir listing
+ - dir listing names-only
+ - upload
+ - upload append
+ - upload via http-proxy as HTTP PUT
+ - download resume
+ - upload resume
+ - QUOT commands (before and/or after the transfer)
+ - simple "range" support
+ - via http-proxy
+
+TELNET
+ - connection negotiation
+ - stdin/stdout I/O
+
+LDAP (*2)
+ - full LDAP URL support
+
+DICT
+ - extended DICT URL support
+
+GOPHER
+ - GET
+ - via http-proxy
+
+FILE
+ - URL support
+
+ *1 = requires OpenSSL
+ *2 = requires OpenLDAP
--- /dev/null
+ _ _ ____ _
+ ___| | | | _ \| |
+ / __| | | | |_) | |
+ | (__| |_| | _ <| |___
+ \___|\___/|_| \_\_____|
+
+ How To Compile
+
+Curl has been compiled and built on numerous different operating systems. The
+way to proceed is mainly divided in two different ways: the unix way or the
+windows way.
+
+If you're using Windows (95, 98, NT) or OS/2, you should continue reading from
+the Win32 header below. All other systems should be capable of being
+installed as described under the UNIX header.
+
+PORTS
+=====
+ Just to show off, this is a probably incomplete list of known hardware and
+ operating systems that curl has been compiled for:
+
+ - Ultrix
+ - SINIX-Z v5
+ Alpha DEC OSF 4
+ HP-PA HP-UX 10.X 11.X
+ MIPS IRIX 6.2, 6.5
+ Power AIX 4.2, 4.3.1
+ PowerPC Darwin 1.0
+ PowerPC Mac OS X
+ Sparc Solaris 2.4, 2.5, 2.5.1, 2.6, 7
+ Sparc SunOS 4.1.*
+ i386 BeOS
+ i386 FreeBSD
+ i386 Linux 1.3, 2.0, 2.2
+ i386 NetBSD
+ i386 OS/2
+ i386 OpenBSD
+ i386 Solaris 2.7
+ i386 Windows 95, 98, NT
+ m68k AmigaOS 3
+ m68k OpenBSD
+
+UNIX
+====
+
+ The configure script *always* tries to find a working SSL library unless
+ explicitly told not to. If you have OpenSSL installed in the default
+ search path for your compiler/linker, you don't need to do anything
+ special.
+
+ If you have OpenSSL installed in /usr/local/ssl, you can run configure
+ like:
+
+ ./configure --with-ssl
+
+ If you have OpenSSL installed somewhere else (for example, in /opt/OpenSSL),
+ you can run configure like this:
+
+ ./configure --with-ssl=/opt/OpenSSL
+
+ If you insist on forcing a build *without* SSL support, even though you may
+ have it installed in your system, you can run configure like this:
+
+ ./configure --without-ssl
+
+ If you have OpenSSL installed, but with the libraries in one place and the
+ header files somewhere else, you'll have to set the LDFLAGS and CPPFLAGS
+ environment variables prior to running configure. Something like this
+ should work:
+
+ (with the Bourne shell and its clones):
+
+ CPPFLAGS="-I/path/to/ssl/include" LDFLAGS="-L/path/to/ssl/lib" \
+ ./configure
+
+ (with csh, tcsh and their clones):
+
+ env CPPFLAGS="-I/path/to/ssl/include" LDFLAGS="-L/path/to/ssl/lib" \
+ ./configure
+
+ If your SSL library was compiled with rsaref (usually for use in
+ the United States), you may also need to set:
+
+ LIBS=-lRSAglue -lrsaref
+ (from Doug Kaufman <dkaufman@rahul.net>)
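+
+ For example, with a Bourne-style shell (a sketch):
+
+    LIBS="-lRSAglue -lrsaref" ./configure --with-ssl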
+
+ Without SSL support, just run:
+
+ ./configure
+
+ Then run:
+
+ make
+
+ Use the executable `curl` in the src/ directory.
+
+ 'make install' copies the curl file to /usr/local/bin/ (or $prefix/bin
+ if you used the --prefix option to configure) and copies the curl.1
+ man page to a suitable place too.
+
+ KNOWN PROBLEMS
+
+ If you happen to have autoconf installed, but a version older than
+ 2.12, you will get into trouble. Then you can still build curl by
+ issuing these commands: (from Ralph Beckmann <rabe@uni-paderborn.de>)
+
+ ./configure [...]
+ cd lib; make; cd ..
+ cd src; make; cd ..
+ cp src/curl elsewhere/bin/
+
+ OPTIONS
+
+ Remember, to force configure to use the standard cc compiler if both
+ cc and gcc are present, run configure like
+
+ CC=cc ./configure
+ or
+ env CC=cc ./configure
+
+
+Win32
+=====
+
+ Without SSL:
+
+ MingW32 (GCC-2.95) style
+ ------------------------
+ Run the 'mingw32.bat' file to get the proper environment variables
+ set, then run 'make -f Makefile.m32' in the lib/ dir and then
+ 'make -f Makefile.m32' in the src/ dir.
+
+ If you have any problems linking libraries or finding header files,
+ be sure to look at the provided "Makefile.m32" files for the proper
+ paths, and adjust as necessary.
+
+ Cygwin style
+ ------------
+ Almost identical to the unix installation. Run the configure script
+ in the curl root with 'sh configure'. Make sure you have the sh
+ executable in /bin/ or you'll see the configure fail towards the
+ end.
+
+ Run 'make'
+
+ Microsoft command line style
+ ----------------------------
+ Run the 'vcvars32.bat' file to get the proper environment variables
+ set, then run 'nmake -f Makefile.vc6' in the lib/ dir and then
+ 'nmake -f Makefile.vc6' in the src/ dir.
+
+ IDE-style
+ -------------------------
+ If you use VC++, Borland or similar compilers, include all lib source
+ files in a static lib "project" (all .c and .h files, that is). You
+ should name it libcurl or similar.
+
+ Make the sources in the src/ directory into a "win32 console application"
+ project. Name it curl.
+
+ With VC++, add 'wsock32.lib' to the link libs when you build curl!
+ Borland seems to do that itself magically. Of course you have to
+ make sure it links with libcurl too!
+
+ For VC++ 6, there's an included Makefile.vc6 that should be possible
+ to use out-of-the-box.
+
+ Microsoft note: add /Zm200 to the compiler options, as the hugehelp.c
+ won't compile otherwise due to "too long puts string" or something
+ like that!
+
+
+ With SSL:
+
+ MingW32 (GCC-2.95) style
+ ------------------------
+ Run the 'mingw32.bat' file to get the proper environment variables
+ set, then run 'make -f Makefile.m32 SSL=1' in the lib/ dir and then
+ 'make -f Makefile.m32 SSL=1' in the src/ dir.
+
+ If you have any problems linking libraries or finding header files,
+ be sure to look at the provided "Makefile.m32" files for the proper
+ paths, and adjust as necessary.
+
+ Cygwin style
+ ------------
+
+ This hasn't been done yet, nor have we received any reports on how to do
+ it. It should, however, be identical to the unix setup for the same
+ purpose. See above.
+
+ Microsoft command line style
+ ----------------------------
+ Run the 'vcvars32.bat' file to get the proper environment variables
+ set, then run 'nmake -f Makefile.vc6 release-ssl' in the lib/ dir and
+ then 'nmake -f Makefile.vc6' in the src/ dir.
+
+ Microsoft / Borland style
+ -------------------------
+ If you have OpenSSL, and want curl to take advantage of it, edit your
+ project properties to use the SSL include path, link with the SSL libs
+ and define the USE_SSLEAY symbol.
+
+
+IBM OS/2
+========
+
+ Building under OS/2 is not much different from building under unix.
+ You need:
+
+ - emx 0.9d
+ - GNU make
+ - GNU patch
+ - ksh
+ - GNU bison
+ - GNU file utilities
+ - GNU sed
+ - autoconf 2.13
+
+ If you want to build with OpenSSL or OpenLDAP support, you'll need to
+ download those libraries, too. Dirk Ohme has done some work to port SSL
+ libraries under OS/2, but it looks like he doesn't care about emx. You'll
+ find his patches on: http://come.to/Dirk.Ohme
+
+ If during the linking you get an error about _errno being an undefined
+ symbol referenced from the text segment, you need to add -D__ST_MT_ERRNO__
+ to your definitions.
+
+ If everything seems to work fine but there's no curl.exe, you need to add
+ -Zexe to your linker flags.
+
+ If you're getting huge binaries, your makefiles probably have -g in
+ CFLAGS.
+
+OpenSSL
+=======
+
+ You'll find OpenSSL information at:
+
+ http://www.openssl.org
+
+
+MingW32/Cygwin
+==============
+
+ You'll find MingW32 and Cygwin information at:
+
+ http://www.xraylith.wisc.edu/~khan/software/gnu-win32/index.html
+
+OpenLDAP
+========
+
+ You'll find OpenLDAP information at:
+
+ http://www.openldap.org
+
+ You need to install it with shared libraries, which is enabled when running
+ the ldap configure script with "--enable-shared". With my linux 2.0.36
+ kernel I also had to disable using threads (with --without-threads),
+ because the configure script couldn't figure out my system.
--- /dev/null
+ _ _ ____ _
+ ___| | | | _ \| |
+ / __| | | | |_) | |
+ | (__| |_| | _ <| |___
+ \___|\___/|_| \_\_____|
+
+INTERNALS
+
+ The project is kind of split in two: the library and the client. The client
+ part uses the library, but the library is designed to allow other
+ applications to use it as well.
+
+ Thus, the largest amount of code and complexity is in the library part.
+
+Windows vs Unix
+===============
+
+ There are a few differences in how to program curl the unix way compared to
+ the Windows way. The four most notable details are:
+
+ 1. Different function names for close(), read(), write()
+ 2. Windows requires a couple of init calls
+ 3. The file descriptors for network communication and file operations are
+    not as easily interchangeable as in unix
+ 4. When writing data to stdout, Windows makes end-of-lines the DOS way, thus
+ destroying binary data, although you do want that conversion if it is
+ text coming through... (sigh)
+
+ In curl, (1) and (2) are done with defines and macros, so that the source
+ looks the same at all places except for the header file that defines them.
+
+ (3) is simply avoided by not trying any funny tricks on file descriptors.
+
+ (4) is left alone, giving windows users problems when they pipe binary data
+ through stdout...
+
+ Inside the source code, I do make an effort to avoid '#ifdef WIN32'. All
+ conditionals that deal with features *should* instead be in the format
+ '#ifdef HAVE_THAT_WEIRD_FUNCTION'. Since Windows can't run configure scripts,
+ I maintain two config-win32.h files (one in / and one in src/) that are
+ supposed to look exactly like a config.h file would have looked on a
+ Windows machine!
+
+Library
+=======
+
+ There are a few entry points to the library, namely each publicly defined
+ function that libcurl offers to applications. All of those functions are
+ rather small and easy to follow, except for the single do-it-all function
+ named curl_urlget() (entry point in lib/url.c).
+
+ curl_urlget() takes a variable number of arguments, and they must all be
+ passed in pairs: the parameter-ID and the parameter-value. The list of
+ arguments must be ended with an end-of-arguments parameter-ID.
+
+ The function then analyzes the URL, extracts the different components
+ and connects to the remote host. This may involve using a proxy and/or using
+ SSL. The GetHost() function in lib/hostip.c is used for looking up host
+ names.
+
+ When connected, the proper function is called. The functions are named after
+ the protocols they handle. ftp(), http(), dict(), etc. They all reside in
+ their respective files (ftp.c, http.c and dict.c).
+
+ The protocol-specific functions deal with protocol-specific negotiations and
+ setup. They have access to the sendf() (from lib/sendf.c) function to send
+ printf-style formatted data to the remote host and when they're ready to make
+ the actual file transfer they call the Transfer() function (in
+ lib/download.c) to do the transfer. All printf()-style functions use the
+ supplied clones in lib/mprintf.c.
+
+ While transferring, the progress functions in lib/progress.c are called at a
+ frequent interval. The speedcheck functions in lib/speedcheck.c are also used
+ to verify that the transfer is as fast as required.
+
+ When the operation is done, the writeout() function in lib/writeout.c may be
+ called to report about the operation as specified previously in the arguments
+ to curl_urlget().
+
+ HTTP(S)
+
+ HTTP offers a lot and is the protocol in curl that uses the most lines of
+ code. There is a special file (lib/formdata.c) that offers all the multipart
+ post functions.
+
+ The base64 functions for user+password stuff are in lib/base64.c and all
+ functions for parsing and sending cookies are found in lib/cookie.c.
+
+ HTTPS uses in almost every way the same procedure as HTTP, with only two
+ exceptions: the connect procedure is different and the functions used to
+ read from and write to the socket are different.
+
+ FTP
+
+ The if2ip() function can be used for getting the IP number of a specified
+ network interface, and it resides in lib/if2ip.c.
+
+ TELNET
+
+ Telnet is implemented in lib/telnet.c.
+
+ FILE
+
+ The file:// protocol is dealt with in lib/file.c.
+
+ LDAP
+
+ Everything LDAP is in lib/ldap.c.
+
+ GENERAL
+
+ URL encoding and decoding, called escaping and unescaping in the source code,
+ is found in lib/escape.c.
+
+ While transferring data in Transfer(), a few functions might get
+ used. get_date() in lib/getdate.c is for HTTP date comparisons.
+
+ lib/getenv.c is for reading environment variables in a neat platform
+ independent way. That's used in the client, but also in lib/url.c when
+ checking the PROXY variables.
+
+ lib/netrc.c keeps the .netrc parser.
+
+ lib/timeval.c features replacement functions for systems that don't have
+ gettimeofday().
+
+ A function named curl_version() that returns the full curl version string is
+ found in lib/version.c.
+
+Client
+======
+
+ main() resides in src/main.c together with most of the client
+ code. src/hugehelp.c is automatically generated by the mkhelp.pl perl script
+ to display the complete "manual" and the src/urlglob.c file holds the
+ functions used for the multiple-URL support.
+
+ The client mostly messes around setting up its config struct properly, then
+ it calls the curl_urlget() function in the library, and when it gets back
+ control it checks the status and exits.
+
--- /dev/null
+LATEST VERSION
+
+ You always find news about what's going on as well as the latest versions
+ from the curl web pages, located at:
+
+ http://curl.haxx.nu
+
+SIMPLE USAGE
+
+ Get the main page from netscape's web-server:
+
+ curl http://www.netscape.com/
+
+ Get the root README file from funet's ftp-server:
+
+ curl ftp://ftp.funet.fi/README
+
+ Get a gopher document from funet's gopher server:
+
+ curl gopher://gopher.funet.fi
+
+ Get a web page from a server using port 8000:
+
+ curl http://www.weirdserver.com:8000/
+
+ Get a list of the root directory of an FTP site:
+
+ curl ftp://ftp.fts.frontec.se/
+
+ Get the definition of curl from a dictionary:
+
+ curl dict://dict.org/m:curl
+
+DOWNLOAD TO A FILE
+
+ Get a web page and store in a local file:
+
+ curl -o thatpage.html http://www.netscape.com/
+
+ Get a web page and store in a local file, make the local file get the name
+ of the remote document (if no file name part is specified in the URL, this
+ will fail):
+
+ curl -O http://www.netscape.com/index.html
+
+USING PASSWORDS
+
+ FTP
+
+ To ftp files using name+passwd, include them in the URL like:
+
+ curl ftp://name:passwd@machine.domain:port/full/path/to/file
+
+ or specify them with the -u flag like
+
+ curl -u name:passwd ftp://machine.domain:port/full/path/to/file
+
+ HTTP
+
+ HTTP URLs don't support user and password in the URL string. Curl
+ does support that anyway to provide an ftp-style interface, and thus you can
+ pick a file like:
+
+ curl http://name:passwd@machine.domain/full/path/to/file
+
+ or specify user and password separately like in
+
+ curl -u name:passwd http://machine.domain/full/path/to/file
+
+ NOTE! Since HTTP URLs don't support user and password, you can't use that
+ style when using Curl via a proxy. You _must_ use the -u style fetch
+ under such circumstances.
+
+ HTTPS
+
+ Probably most commonly used with private certificates, as explained below.
+
+ GOPHER
+
+ Curl features no password support for gopher.
+
+PROXY
+
+ Get an ftp file using a proxy named my-proxy that uses port 888:
+
+ curl -x my-proxy:888 ftp://ftp.leachsite.com/README
+
+ Get a file from a HTTP server that requires user and password, using the
+ same proxy as above:
+
+ curl -u user:passwd -x my-proxy:888 http://www.get.this/
+
+ Some proxies require special authentication. Specify by using -U as above:
+
+ curl -U user:passwd -x my-proxy:888 http://www.get.this/
+
+ See also the environment variables Curl supports that offer further proxy
+ control.
+
+RANGES
+
+ With HTTP 1.1 byte-ranges were introduced. Using this, a client can request
+ to get only one or more subparts of a specified document. Curl supports
+ this with the -r flag.
+
+ Get the first 100 bytes of a document:
+
+ curl -r 0-99 http://www.get.this/
+
+ Get the last 500 bytes of a document:
+
+ curl -r -500 http://www.get.this/
+
+ Curl also supports simple ranges for FTP files. There, you can only
+ specify the start and stop positions.
+
+ Get the first 100 bytes of a document using FTP:
+
+ curl -r 0-99 ftp://www.get.this/README
+
+UPLOADING
+
+ FTP
+
+ Upload all data on stdin to a specified ftp site:
+
+ curl -t ftp://ftp.upload.com/myfile
+
+ Upload data from a specified file, login with user and password:
+
+ curl -T uploadfile -u user:passwd ftp://ftp.upload.com/myfile
+
+ Upload a local file to the remote site, and use the local file name remote
+ too:
+
+ curl -T uploadfile -u user:passwd ftp://ftp.upload.com/
+
+ Upload a local file to get appended to the remote file using ftp:
+
+ curl -T localfile -a ftp://ftp.upload.com/remotefile
+
+ NOTE: Curl does not support ftp upload through a proxy! The reason for this
+ is simply that proxies are seldom configured to allow this and that no
+ author has supplied code that makes it possible!
+
+ HTTP
+
+ Upload all data on stdin to a specified http site:
+
+ curl -t http://www.upload.com/myfile
+
+ Note that the http server must've been configured to accept PUT before this
+ can be done successfully.
+
+ For other ways to do http data upload, see the POST section below.
+
+VERBOSE / DEBUG
+
+ If curl fails where it isn't supposed to, if the servers don't let you
+ in, if you can't understand the responses: use the -v flag to get VERBOSE
+ fetching. Curl will output lots of info and all data it sends and
+ receives in order to let the user see all client-server interaction.
+
+ curl -v ftp://ftp.upload.com/
+
+DETAILED INFORMATION
+
+ Different protocols provide different ways of getting detailed information
+ about specific files/documents. To get curl to show detailed information
+ about a single file, you should use the -I/--head option. It displays all
+ available info on a single file for HTTP and FTP. The HTTP information is a
+ lot more extensive.
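+
+ For example (re-using the made-up host name from above):
+
+    curl -I http://www.get.this/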
+
+ For HTTP, you can get the header information (the same as -I would show)
+ shown before the data by using -i/--include. Curl understands the
+ -D/--dump-header option when getting files from both FTP and HTTP, and it
+ will then store the headers in the specified file.
+
+ Store the HTTP headers in a separate file:
+
+ curl --dump-header headers.txt curl.haxx.nu
+
+ Note that headers stored in a separate file can be very useful at a later
+ time if you want curl to use cookies sent by the server. More about that in
+ the cookies section.
+
+POST (HTTP)
+
+ It's easy to post data using curl. This is done using the -d <data>
+ option. The post data must be urlencoded.
+
+ Post a simple "name" and "phone" guestbook.
+
+ curl -d "name=Rafael%20Sagula&phone=3320780" \
+ http://www.where.com/guest.cgi
+
+ How to post a form with curl, lesson #1:
+
+ Dig out all the <input> tags in the form that you want to fill in. (There's
+ a perl program called formfind.pl on the curl site that helps with this).
+
+ If there's a "normal" post, you use -d to post. -d takes a full "post
+ string", which is in the format
+
+ <variable1>=<data1>&<variable2>=<data2>&...
+
+ The 'variable' names are the names set with "name=" in the <input> tags, and
+ the data is the contents you want to fill in for the inputs. The data *must*
+ be properly URL encoded. That means you replace space with + and that you
+ write weird letters with %XX where XX is the hexadecimal representation of
+ the letter's ASCII code.
+
+ Example:
+
+ (page located at http://www.formpost.com/getthis/)
+
+ <form action="post.cgi" method="post">
+ <input name=user size=10>
+ <input name=pass type=password size=10>
+ <input name=id type=hidden value="blablabla">
+ <input name=ding value="submit">
+ </form>
+
+ We want to enter user 'foobar' with password '12345'.
+
+ To post to this, you enter a curl command line like:
+
+ curl -d "user=foobar&pass=12345&id=blablabla&dig=submit" (continues)
+ http://www.formpost.com/getthis/post.cgi
+
+
+ While -d uses the application/x-www-form-urlencoded mime-type, generally
+ understood by CGI's and similar, curl also supports the more capable
+ multipart/form-data type. This latter type supports things like file upload.
+
+ -F accepts parameters like -F "name=contents". If you want the contents to
+ be read from a file, use <@filename> as contents. When specifying a file,
+ you can also specify which content type the file is, by appending
+ ';type=<mime type>' to the file name. You can also post the contents of
+ several files in one field. For example, the field name 'coolfiles' can be
+ used to send three files with different content types in a manner similar to:
+
+ curl -F "coolfiles=@fil1.gif;type=image/gif,fil2.txt,fil3.html" \
+ http://www.post.com/postit.cgi
+
+ If content-type is not specified, curl will try to guess from the extension
+ (it only knows a few), or use the previously specified type (from an earlier
+ file if several files are specified in a list) or finally use the default
+ type 'text/plain'.
+
+ Emulate a fill-in form with -F. Let's say you fill in three fields in a
+ form. One field is the file name to post, one field is your name and one
+ field is a file description. We want to post the file we have written named
+ "cooltext.txt". To let curl do the posting of this data instead of your
+ favourite browser, you have to check out the HTML of the form page to get to
+ know the names of the input fields. In our example, the input field names are
+ 'file', 'yourname' and 'filedescription'.
+
+ curl -F "file=@cooltext.txt" -F "yourname=Daniel" \
+ -F "filedescription=Cool text file with cool text inside" \
+ http://www.post.com/postit.cgi
+
+ So, to send two files in one post you can do it in two ways:
+
+ 1. Send multiple files in a single "field" with a single field name:
+
+ curl -F "pictures=@dog.gif,cat.gif"
+
+ 2. Send two fields with two field names:
+
+ curl -F "docpicture=@dog.gif" -F "catpicture=@cat.gif"
+
+REFERER
+
+ An HTTP request has the option to include information about which address
+ referred it to the actual page, and curl allows the user to specify that
+ referrer on the command line. It is especially useful to
+ fool or trick stupid servers or CGI scripts that rely on that information
+ being available or containing certain data.
+
+ curl -e www.coolsite.com http://www.showme.com/
+
+USER AGENT
+
+ An HTTP request has the option to include information about the browser
+ that generated the request. Curl allows it to be specified on the command
+ line. It is especially useful to fool or trick stupid servers or CGI
+ scripts that only accept certain browsers.
+
+ Example:
+
+ curl -A 'Mozilla/3.0 (Win95; I)' http://www.nationsbank.com/
+
+ Other common strings:
+ 'Mozilla/3.0 (Win95; I)' Netscape Version 3 for Windows 95
+ 'Mozilla/3.04 (Win95; U)' Netscape Version 3 for Windows 95
+ 'Mozilla/2.02 (OS/2; U)' Netscape Version 2 for OS/2
+ 'Mozilla/4.04 [en] (X11; U; AIX 4.2; Nav)' NS for AIX
+ 'Mozilla/4.05 [en] (X11; U; Linux 2.0.32 i586)' NS for Linux
+
+ Note that Internet Explorer tries hard to be compatible in every way:
+ 'Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)' MSIE for W95
+
+ Mozilla is not the only possible User-Agent name:
+ 'Konqueror/1.0' KDE File Manager desktop client
+ 'Lynx/2.7.1 libwww-FM/2.14' Lynx command line browser
+
+COOKIES
+
+ Cookies are generally used by web servers to keep state information at the
+ client's side. The server sets cookies by sending a response line in the
+ headers that looks like 'Set-Cookie: <data>' where the data part then
+ typically contains a set of NAME=VALUE pairs (separated by semicolons ';'
+ like "NAME1=VALUE1; NAME2=VALUE2;"). The server can also specify for what
+ path the "cookie" should be used for (by specifying "path=value"), when the
+ cookie should expire ("expire=DATE"), for what domain to use it
+ ("domain=NAME") and if it should be used on secure connections only
+ ("secure").
+
+ If you've received a page from a server that contains a header like:
+ Set-Cookie: sessionid=boo123; path="/foo";
+
+ it means the server wants that first pair passed on when we get anything in
+ a path beginning with "/foo".
+
+ Example, get a page that wants my name passed in a cookie:
+
+ curl -b "name=Daniel" www.sillypage.com
+
+ Curl also has the ability to use previously received cookies in following
+ sessions. If you get cookies from a server and store them in a file in a
+ manner similar to:
+
+ curl --dump-header headers www.example.com
+
+ ... you can then, in a second connection to that (or another) site, use the
+ cookies from the 'headers' file like:
+
+ curl -b headers www.example.com
+
+ Note that by specifying -b you enable the "cookie awareness" and with -L
+ you can make curl follow a location: (which often is used in combination
+ with cookies). So if a site sends cookies and a location, you can
+ use a non-existing file to trigger the cookie awareness like:
+
+ curl -L -b empty-file www.example.com
+
+ The file to read cookies from must be formatted using plain HTTP headers OR
+ as netscape's cookie file. Curl will determine what kind it is based on the
+ file contents.
+
+PROGRESS METER
+
+ The progress meter exists to show a user that something actually is
+ happening. The different fields in the output have the following meaning:
+
+ % Total % Received % Xferd Average Speed Time Curr.
+ Dload Upload Total Current Left Speed
+ 0 151M 0 38608 0 0 9406 0 4:41:43 0:00:04 4:41:39 9287
+
+ From left-to-right:
+ % - percentage completed of the whole transfer
+ Total - total size of the whole expected transfer
+ % - percentage completed of the download
+ Received - currently downloaded amount of bytes
+ % - percentage completed of the upload
+ Xferd - currently uploaded amount of bytes
+ Average Speed
+ Dload - the average transfer speed of the download
+ Average Speed
+ Upload - the average transfer speed of the upload
+ Time Total - expected time to complete the operation
+ Time Current - time passed since the invocation
+ Time Left - expected time left to completion
+ Curr.Speed - the average transfer speed the last 5 seconds (the first
+ 5 seconds of a transfer is based on less time of course.)
+
+ The -# option will display a totally different progress bar that doesn't
+ need much explanation!
+
+SPEED LIMIT
+
+ Curl allows the user to set conditions regarding transfer speed that must
+ be met to let the transfer keep going. By using the switches -y and -Y you
+ can make curl abort transfers if the transfer speed doesn't exceed your
+ given lowest limit for a specified time.
+
+ To let curl abandon downloading this page if it's slower than 3000 bytes per
+ second for 1 minute, run:
+
+ curl -y 3000 -Y 60 www.far-away-site.com
+
+ This can very well be used in combination with the overall time limit, so
+ that the above operation must be completed as a whole within 30 minutes:
+
+ curl -m 1800 -y 3000 -Y 60 www.far-away-site.com
+
+CONFIG FILE
+
+ Curl automatically tries to read the .curlrc file (or _curlrc file on win32
+ systems) from the user's home dir on startup. The config file should be
+ made up of normal command line switches. Comments can be used within the
+ file. If the first character on a line is a '#', the rest of the line
+ is treated as a comment.
+
+ Example, set default time out and proxy in a config file:
+
+ # We want a 30 minute timeout:
+ -m 1800
+ # ... and we use a proxy for all accesses:
+ -x proxy.our.domain.com:8080
+
+ White spaces ARE significant at the end of lines, but all white spaces
+ leading up to the first characters of each line are ignored.
+
+ Prevent curl from reading the default file by using -q as the first command
+ line parameter, like:
+
+ curl -q www.thatsite.com
+
+ Force curl to get and display a local help page in case it is invoked
+ without a URL by making a config file similar to:
+
+ # default url to get
+ http://help.with.curl.com/curlhelp.html
+
+ You can specify another config file to be read by using the -K/--config
+ flag. If you set config file name to "-" it'll read the config from stdin,
+ which can be handy if you want to hide options from being visible in process
+ tables etc:
+
+ echo "-u user:passwd" | curl -K - http://that.secret.site.com
+
+EXTRA HEADERS
+
+ When using curl in your own very special programs, you may end up needing
+ to pass on your own custom headers when getting a web page. You can do
+ this by using the -H flag.
+
+ Example, send the header "X-you-and-me: yes" to the server when getting a
+ page:
+
+ curl -H "X-you-and-me: yes" www.love.com
+
+ This can also be useful in case you want curl to send a different text in
+ a header than it normally does. The -H header you specify then replaces the
+ header curl would normally send.
+
+FTP and PATH NAMES
+
+ Do note that when getting files with the ftp:// URL, the given path is
+ relative to the directory you enter. To get the file 'README' from your home
+ directory at your ftp site, do:
+
+ curl ftp://user:passwd@my.site.com/README
+
+ But if you want the README file from the root directory of that very same
+ site, you need to specify the absolute file name:
+
+ curl ftp://user:passwd@my.site.com//README
+
+ (I.e with an extra slash in front of the file name.)
+
+FTP and firewalls
+
+ The FTP protocol requires one of the involved parties to open a second
+ connection as soon as data is about to get transferred. There are two ways to
+ do this.
+
+ The default way for curl is to issue the PASV command which causes the
+ server to open another port and await another connection performed by the
+ client. This is good if the client is behind a firewall that doesn't allow
+ incoming connections.
+
+ curl ftp.download.com
+
+ If the server, for example, is behind a firewall that doesn't allow
+ connections on ports other than 21 (or if it just doesn't support the PASV
+ command), the other way to do it is to use the PORT command and instruct the
+ server to connect to the client on the given IP number and port (passed as
+ parameters to the PORT command).
+
+ The -P flag to curl allows for different options. Your machine may have
+ several IP-addresses and/or network interfaces and curl allows you to select
+ which of them to use. The default address can also be used:
+
+ curl -P - ftp.download.com
+
+ Download with PORT but use the IP address of our 'le0' interface:
+
+ curl -P le0 ftp.download.com
+
+ Download with PORT but use 192.168.0.10 as our IP address to use:
+
+ curl -P 192.168.0.10 ftp.download.com
+
+HTTPS
+
+ Secure HTTP requires SSL libraries to be installed and used when curl is
+ built. If that is done, curl is capable of retrieving and posting documents
+ using the HTTPS protocol.
+
+ Example:
+
+ curl https://www.secure-site.com
+
+ Curl is also capable of using your personal certificates to get/post files
+ from sites that require valid certificates. The only drawback is that the
+ certificate needs to be in PEM-format. PEM is a standard and open format to
+ store certificates with, but it is not used by the most commonly used
+ browsers (Netscape and MSIE both use the so called PKCS#12 format). If you
+ want curl to use the certificates you use with your (favourite) browser, you
+ may need to download/compile a converter that can convert your browser's
+ formatted certificates to PEM formatted ones. This kind of converter is
+ included in recent versions of OpenSSL, and for older versions Dr Stephen
+ N. Henson has written a patch for SSLeay that adds this functionality. You
+ can get his patch (that requires an SSLeay installation) from his site at:
+ http://www.drh-consultancy.demon.co.uk/
+
+ Example on how to automatically retrieve a document using a certificate with
+ a personal password:
+
+ curl -E /path/to/cert.pem:password https://secure.site.com/
+
+ If you neglect to specify the password on the command line, you will be
+ prompted for the correct password before any data can be received.
+
+ Many older SSL servers have problems with SSLv3 or TLS, which newer versions
+ of OpenSSL etc use, so it is sometimes useful to specify what
+ SSL version curl should use. Use -3 or -2 to specify the exact SSL version
+ to use:
+
+ curl -2 https://secure.site.com/
+
+ Otherwise, curl will first attempt to use v3 and then v2.
+
+RESUMING FILE TRANSFERS
+
+ To continue a file transfer where it was previously aborted, curl supports
+ resume on http(s) downloads as well as ftp uploads and downloads.
+
+ Continue downloading a document:
+
+ curl -c -o file ftp://ftp.server.com/path/file
+
+ Continue uploading a document(*1):
+
+ curl -c -T file ftp://ftp.server.com/path/file
+
+ Continue downloading a document from a web server(*2):
+
+ curl -c -o file http://www.server.com/
+
+ (*1) = This requires that the ftp server supports the non-standard command
+ SIZE. If it doesn't, curl will say so.
+
+ (*2) = This requires that the web server supports at least HTTP/1.1. If it
+ doesn't, curl will say so.
+
+TIME CONDITIONS
+
+ HTTP allows a client to specify a time condition for the document it
+ requests. It is If-Modified-Since or If-Unmodified-Since. Curl allows you to
+ specify them with the -z/--time-cond flag.
+
+ For example, you can easily make a download that only gets performed if the
+ remote file is newer than a local copy. It would be made like:
+
+ curl -z local.html http://remote.server.com/remote.html
+
+ Or you can download a file only if the local file is newer than the remote
+ one. Do this by prepending the date string with a '-', as in:
+
+ curl -z -local.html http://remote.server.com/remote.html
+
+ You can specify a "free text" date as condition. Tell curl to only download
+ the file if it was updated since yesterday:
+
+ curl -z yesterday http://remote.server.com/remote.html
+
+ Curl will then accept a wide range of date formats. You can always reverse
+ the date check by prepending the date with a dash '-'.
+
+DICT
+
+ For fun try
+
+ curl dict://dict.org/m:curl
+ curl dict://dict.org/d:heisenbug:jargon
+ curl dict://dict.org/d:daniel:web1913
+
+ Aliases for 'm' are 'match' and 'find', and aliases for 'd' are 'define'
+ and 'lookup'. For example,
+
+ curl dict://dict.org/find:curl
+
+ Commands that break the URL description of the RFC (but not the DICT
+ protocol) are
+
+ curl dict://dict.org/show:db
+ curl dict://dict.org/show:strat
+
+ Authentication is still missing (but this is not required by the RFC)
+
+LDAP
+
+ If you have installed the OpenLDAP library, curl can take advantage of it
+ and offer ldap:// support.
+
+ LDAP is a complex thing and writing an LDAP query is not an easy task. I do
+ advise you to dig up the syntax description for that elsewhere, RFC 1959 if
+ no other place is better.
+
+ To show you an example, this is how I can get all people from my local LDAP
+ server who have a certain sub-domain in their email address:
+
+ curl -B "ldap://ldap.frontec.se/o=frontec??sub?mail=*sth.frontec.se"
+
+ If I want the same info in HTML format, I can get it by not using the -B
+ (enforce ASCII) flag.
+
+ENVIRONMENT VARIABLES
+
+ Curl reads and understands the following environment variables:
+
+ HTTP_PROXY, HTTPS_PROXY, FTP_PROXY, GOPHER_PROXY
+
+ They should be set for protocol-specific proxies. A general proxy should be
+ set with
+
+ ALL_PROXY
+
+ A comma-separated list of host names that shouldn't go through any proxy is
+ set in (only an asterisk, '*' matches all hosts)
+
+ NO_PROXY
+
+ If a tail substring of the domain-path for a host matches one of these
+ strings, transactions with that node will not be proxied.
+
+
+ The usage of the -x/--proxy flag overrides the environment variables.
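+
+ As a sketch (Bourne-style shell, made-up proxy name, host:port format as
+ with -x above):
+
+    ALL_PROXY=proxy.our.domain.com:8080 curl http://www.get.this/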
+
+NETRC
+
+ Unix introduced the .netrc concept a long time ago. It is a way for a user
+ to specify name and password for commonly visited ftp sites in a file so
+ that you don't have to type them in each time you visit those sites. You
+ realize this is a big security risk if someone else gets hold of your
+ passwords, so therefore most unix programs won't read this file unless it is
+ only readable by yourself (curl doesn't care though).
+
+ Curl supports .netrc files if told so (using the -n/--netrc option). This is
+ not restricted to only ftp, but curl can use it for all protocols where
+ authentication is used.
+
+ A very simple .netrc file could look something like:
+
+ machine curl.haxx.nu login iamdaniel password mysecret
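+
+ With such a file in place, a fetch that picks its login from .netrc could
+ look like this (assuming that machine also runs an ftp server):
+
+    curl -n ftp://curl.haxx.nu/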
+
+CUSTOM OUTPUT
+
+ To better allow script programmers to get to know about the progress of
+ curl, the -w/--write-out option was introduced. Using this, you can specify
+ what information from the previous transfer you want to extract.
+
+ To display the amount of bytes downloaded together with some text and an
+ ending newline:
+
+ curl -w 'We downloaded %{size_download} bytes\n' www.download.com
+
+MAILING LIST
+
+ We have an open mailing list to discuss curl, its development and things
+ relevant to this.
+
+ To subscribe, mail curl-request@contactor.se with "subscribe <your email
+ address>" in the body.
+
+ To post to the list, mail curl@contactor.se.
+
+ To unsubscribe, mail curl-request@contactor.se with "unsubscribe <your
+ subscribed email address>" in the body.
+
--- /dev/null
+ _ _ _ _
+ | (_) |__ ___ _ _ _ __| |
+ | | | '_ \ / __| | | | '__| |
+ | | | |_) | (__| |_| | | | |
+ |_|_|_.__/ \___|\__,_|_| |_|
+
+
+ How To Use Libcurl In Your Program:
+ (by Ralph Beckmann <rabe@uni-paderborn.de>)
+
+NOTE: If you plan to use libcurl.a in Threads under Linux, do not use the old
+gcc-2.7.x because the function 'gethostbyname' seems not to be thread-safe,
+that is to say an unavoidable SEGMENTATION FAULT might occur.
+
+
+1. a) In a C-Program:
+ #include "curl.h"
+
+ b) In a C++-Program:
+ extern "C" {
+ #include "curl.h"
+ }
+
+2. char *url="http://www.domain.com";
+ curl_urlget (URGTAG_URL, url,
+ URGTAG_FLAGS, CONF_NOPROGRESS,
+ URGTAG_ERRORBUFFER, errorBuffer,
+ URGTAG_WRITEFUNCTION, (size_t (*)(void *, int, int, FILE *))handle_data,
+ URGTAG_TIMEOUT, 30, /* or anything You want */
+ ...
+ URGTAG_DONE);
+
+3. size_t handle_data (const void *ptr, size_t size, size_t nitems,
+ FILE *stream)
+ {
+ (void)stream; /* stop complaining using g++ -Wall */
+ if ((int)nitems <= 0) {
+ return (size_t)0;
+ }
+ fwrite(ptr, size, nitems, stdout); /* or do anything else with it */
+ return nitems;
+ }
+
+4. Compile Your Program with -I$(CURL_DIR)/include
+
+5. Link Your Program together with $(CURL_DIR)/lib/libcurl.a
+
+ Small Example of How To Use libcurl
+
+----------------------------------------------------------------------
+/* Full example that uses libcurl.a to fetch web pages. */
+/* curlthreads.c */
+/* - Test-Program by Ralph Beckmann for using curl in POSIX-Threads */
+/* Change *url1 and *url2 to textual long and slow non-FRAMESET websites! */
+/*
+ 1. Compile with gcc or g++ as $(CC):
+ $(CC) -c -Wall -pedantic curlthreads.c -I$(CURL_DIR)/include
+
+ 2. Link with:
+ - Linux:
+ $(CC) -o curlthreads curlthreads.o $(CURL_DIR)/lib/libcurl.a -lpthread -lm
+ - Solaris:
+ $(CC) -o curlthreads curlthreads.o $(CURL_DIR)/lib/libcurl.a -lpthread -lm -lsocket -lnsl
+*/
+
+#include <pthread.h>
+#include <stdio.h>
+#ifdef __cplusplus
+extern "C" {
+#include "curl.h"
+}
+#else
+#include "curl.h"
+#endif
+
+size_t storedata (const void *ptr, size_t size, size_t nitems, FILE *stream) {
+ (void)ptr; (void)stream; /* just to stop g++ -Wall complaining */
+ fprintf(stdout, "Thread #%i reads %i Bytes.\n",
+ (int)pthread_self(), (int)(nitems*size));
+ return (nitems);
+}
+
+void *urlfetcher(void *url) {
+ curl_urlget (URGTAG_URL, url,
+ URGTAG_FLAGS, CONF_NOPROGRESS | CONF_FAILONERROR,
+ URGTAG_WRITEFUNCTION, (size_t (*)(void *, int, int, FILE *))storedata,
+ URGTAG_DONE);
+ return NULL;
+}
+
+int main(void) {
+ char *url1="www.sun.com";
+ char *url2="www.microsoft.com";
+
+ pthread_t thread_id1, thread_id2;
+ pthread_create(&thread_id1, NULL, urlfetcher, (void *)url1);
+ pthread_create(&thread_id2, NULL, urlfetcher, (void *)url2);
+ pthread_join(thread_id1, NULL);
+ pthread_join(thread_id2, NULL);
+
+ fprintf(stdout, "Ready.\n");
+
+ return 0;
+}
--- /dev/null
+ _ _ ____ _
+ Project ___| | | | _ \| |
+ / __| | | | |_) | |
+ | (__| |_| | _ <| |___
+ \___|\___/|_| \_\_____|
+
+
+This document has been introduced in order to let you find documents that
+specify standards used by curl, software that extends curl and web pages with
+"competing" utilities.
+
+Standards
+
+ RFC 959 - Defines how FTP works
+
+ RFC 1738 - Uniform Resource Locators
+
+ RFC 1777 - defines the LDAP protocol
+
+ RFC 1808 - Relative Uniform Resource Locators
+
+ RFC 1867 - Form-based File Upload in HTML
+
+ RFC 1950 - ZLIB Compressed Data Format Specification
+
+ RFC 1951 - DEFLATE Compressed Data Format Specification
+
+ RFC 1952 - gzip compression format
+
+ RFC 1959 - LDAP URL syntax
+
+ RFC 2045-2049 - Everything you need to know about MIME! (needed for form
+ based upload)
+
+ RFC 2068 - HTTP 1.1 (obsoleted by RFC 2616)
+
+ RFC 2109 - HTTP State Management Mechanism (cookie stuff)
+ - Also, read Netscape's specification at
+ http://www.netscape.com/newsref/std/cookie_spec.html
+
+ RFC 2183 - "The Content-Disposition Header Field"
+
+ RFC 2229 - "A Dictionary Server Protocol"
+
+ RFC 2231 - "MIME Parameter Value and Encoded Word Extensions:
+ Character Sets, Languages, and Continuations"
+
+ RFC 2388 - "Returning Values from Forms: multipart/form-data"
+ Use this as an addition to RFC 1867
+
+ RFC 2396 - "Uniform Resource Identifiers: Generic Syntax and Semantics"
+ This one obsoletes 1738, but since 1738 is often mentioned I've left it
+ in this list.
+
+ RFC 2428 - "FTP Extensions for IPv6 and NATs"
+ This should be considered when introducing IPv6 awareness.
+
+ RFC 2616 - HTTP 1.1
+
+ RFC 2617 - HTTP Authentication
+
+Compilers
+
+ MingW32 - http://www.xraylith.wisc.edu/~khan/software/gnu-win32/index.html
+
+Software
+
+ OpenSSL - http://www.openssl.org
+ OpenLDAP - http://www.openldap.org
+ zlib - http://www.cdrom.com/pub/infozip/zlib/
+
+Competitors
+
+ wget - ftp://prep.ai.mit.edu/pub/gnu/
+ snarf - http://www.xach.com/snarf/
+ lynx - http://lynx.browser.org/ (well at least when -dump is used)
+ swebget - http://www.uni-hildesheim.de/~smol0075/swebget/
+ fetch - ?
+
--- /dev/null
+ _ _ ____ _
+ ___| | | | _ \| |
+ / __| | | | |_) | |
+ | (__| |_| | _ <| |___
+ \___|\___/|_| \_\_____|
+
+TODO
+
+ Ok, this is what I wanna do with Curl. Please tell me what you think, and
+ please don't hesitate to contribute and send me patches that improve this
+ product! (Yes, you may add things not mentioned here, these are just a
+ few teasers...)
+
+ * rtsp:// support -- "Real Time Streaming Protocol"
+
+ RFC 2326
+
+ * "Content-Encoding: compress/gzip/zlib"
+
+ HTTP 1.1 clearly defines how to get and decode compressed documents. There
+ is the zlib that is pretty good at decompressing stuff. This work was
+ started in October 1999 but halted again since it proved more work than we
+ thought. It is still a good idea to implement though.
+
+ * HTTP Pipelining/persistent connections
+
+ - We should introduce HTTP "pipelining". Curl could be able to request
+ several HTTP documents in one connect. It would be the beginning of
+ supporting more advanced functions in the future, like web site
+ mirroring. This will require that the urlget() function supports several
+ documents from a single HTTP server, which it doesn't today.
+
+ - When curl supports fetching several documents from the same server using
+ pipelining, I'd like to offer that function to the command line. Anyone has
+ a good idea how? The current way of specifying one URL with the output sent
+ to the stdout or a file gets in the way. Imagine a syntax that supports
+ "additional documents from the same server" in a way similar to:
+
+ curl <main URL> --more-doc <path> --more-doc <path>
+
+ where --more-doc specifies another document on the same server. Where are
+ the output files gonna be put and how should they be named? Should each
+ "--more-doc" parameter require a local file name to store the result in?
+ Like "--more-file" as in:
+
+ curl <URL> --more-doc <path> --more-file <file>
+
+ * RFC2617 compliance, "Digest Access Authentication"
+ A valid test page seems to exist at:
+ http://hopf.math.nwu.edu/testpage/digest/
+ And some friendly person's server source code is available at
+ http://hopf.math.nwu.edu/digestauth/index.html
+
+ Then there's the Apache mod_digest source code too of course. It seems as
+ if Netscape doesn't support this, and not many servers do, although this is
+ a much better authentication method than the more common "Basic". Basic
+ sends the password in cleartext over the network, while this "Digest" method
+ uses a challenge-response protocol which increases security quite a lot.
+
+ * Different FTP Upload Through Web Proxy
+ I don't know any web proxies that allow CONNECT through on port 21, but
+ that would be the best way to do ftp upload. All we would need to do would
+ be to 'CONNECT <host>:<port> HTTP/1.0\r\n\r\n' and then do business as
+ usual. At least I think so. It would be fun if someone tried this...
+
+ * Multiple Proxies?
+ Is there anyone that actually uses serial-proxies? I mean, send CONNECT to
+ the first proxy to connect to the second proxy to which you send CONNECT to
+ connect to the remote host (or even more iterations). Is there anyone
+ wanting curl to support it? (Not that it would be hard, just confusing...)
+
+ * Other proxies
+ Ftp-kind proxy, Socks5, whatever kind of proxies are there?
+
+ * IPv6 Awareness
+ Wherever it would fit. I am not into v6 enough yet to fully grasp what we
+ would need to do, but letting the autoconf search for v6-versions of a few
+ functions and then use them instead is of course the first thing to do...
+ RFC 2428 "FTP Extensions for IPv6 and NATs" will be interesting. PORT
+ should be replaced with EPRT for IPv6, and EPSV instead of PASV.
+
+ * An automatic RPM package maker
+ Please, write me a script that makes it. It'd make my day.
+
+ * SSL for more protocols, like SSL-FTP...
+ (http://search.ietf.org/internet-drafts/draft-murray-auth-ftp-ssl-05.txt)
+
+ * HTTP POST resume using Range:
+
+ * Make curl capable of verifying the server's certificate when connecting
+ with HTTPS://.
+
+ * Make the timeout work as expected!
--- /dev/null
+.\" You can view this file with:
+.\" nroff -man curl.1
+.\" Written by Daniel Stenberg
+.\"
+.TH curl 1 "13 March 2000" "Curl 6.5" "Curl Manual"
+.SH NAME
+curl \- get a URL with FTP, TELNET, LDAP, GOPHER, DICT, FILE, HTTP or
+HTTPS syntax.
+.SH SYNOPSIS
+.B curl [options]
+.I url
+.SH DESCRIPTION
+.B curl
+is a client to get documents/files from servers, using any of the
+supported protocols. The command is designed to work without user
+interaction or any kind of interactivity.
+
+curl offers a busload of useful tricks like proxy support, user
+authentication, ftp upload, HTTP post, SSL (https:) connections, cookies, file
+transfer resume and more.
+.SH URL
+The URL syntax is protocol dependent. You'll find a detailed description in
+RFC 2396.
+
+You can specify multiple URLs or parts of URLs by writing part sets within
+braces as in:
+
+ http://site.{one,two,three}.com
+
+or you can get sequences of alphanumeric series by using [] as in:
+
+ ftp://ftp.numericals.com/file[1-100].txt
+ ftp://ftp.numericals.com/file[001-100].txt (with leading zeros)
+ ftp://ftp.letters.com/file[a-z].txt
+
+It is possible to specify up to 9 sets or series for a URL, but no nesting is
+supported at the moment:
+
+ http://www.any.org/archive[1996-1999]/volume[1-4]part{a,b,c,index}.html
+.SH OPTIONS
+.IP "-a/--append"
+(FTP)
+When used in an ftp upload, this will tell curl to append to the target
+file instead of overwriting it. If the file doesn't exist, it will
+be created.
+.IP "-A/--user-agent <agent string>"
+(HTTP)
+Specify the User-Agent string to send to the HTTP server. Some badly done CGIs
+fail if it's not set to "Mozilla/4.0". To encode blanks in the string,
+surround the string with single quote marks. This can also be set with the
+-H/--header flag of course.
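+
+A hypothetical example, where both the agent string and the URL are mere
+placeholders:
+
+ curl -A 'Mozilla/4.0 (compatible)' http://www.example.com/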
+.IP "-b/--cookie <name=data>"
+(HTTP)
+Pass the data to the HTTP server as a cookie. It is supposedly the
+data previously received from the server in a "Set-Cookie:" line.
+The data should be in the format "NAME1=VALUE1; NAME2=VALUE2".
+
+If no '=' letter is used in the line, it is treated as a filename to use to
+read previously stored cookie lines from, which should be used in this session
+if they match. Using this method also activates the "cookie parser" which
+will make curl record incoming cookies too, which may be handy if you're using
+this in combination with the -L/--location option. The file format of the file
+to read cookies from should be plain HTTP headers or the netscape cookie file
+format.
+
+.B NOTE
+that the file specified with -b/--cookie is only used as input. No cookies
+will be stored in the file. To store cookies, save the HTTP headers to a file
+using -D/--dump-header!
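+
+For illustration only (file and host names are made up), cookies saved with
+-D can be fed back in a later invoke like:
+
+ curl -D headers.txt http://www.example.com/
+ curl -b headers.txt -L http://www.example.com/
+
+or a single cookie can be passed directly:
+
+ curl -b "name=value" http://www.example.com/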
+.IP "-B/--ftp-ascii"
+(FTP/LDAP)
+Use ASCII transfer when getting an FTP file or LDAP info. For FTP, this can
+also be enforced by using an URL that ends with ";type=A".
+.IP "-c/--continue"
+Continue/Resume a previous file transfer. This instructs curl to
+continue appending data on the file where it was previously left,
+possibly because of a broken connection to the server. There must be
+a named physical file to append to for this to work.
+Note: Upload resume depends on a command named SIZE, which is not present in
+all ftp servers! Upload resume is for FTP only.
+HTTP resume is only possible with HTTP/1.1 or later servers.
+.IP "-C/--continue-at <offset>"
+Continue/Resume a previous file transfer at the given offset. The
+given offset is the exact number of bytes that will be skipped
+counted from the beginning of the source file before it is transferred
+to the destination.
+If used with uploads, the ftp server command SIZE will not be used by
+curl. Upload resume is for FTP only.
+HTTP resume is only possible with HTTP/1.1 or later servers.
+.IP "-d/--data <data>"
+(HTTP)
+Sends the specified data in a POST request to the HTTP server. Note
+that the data is sent exactly as specified with no extra processing.
+The data is expected to be "url-encoded". This will cause curl to
+pass the data to the server using the content-type
+application/x-www-form-urlencoded. Compare to -F.
+
+If you start the data with the letter @, the rest should be a file name to
+read the data from, or - if you want curl to read the data from stdin.
+The contents of the file must already be url-encoded.
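+
+As a sketch (host and field names are invented), already url-encoded data
+could be posted like:
+
+ curl -d "name=daniel&phone=123456" http://www.example.com/form.cgi
+
+or read from a file:
+
+ curl -d @data.txt http://www.example.com/form.cgi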
+.IP "-D/--dump-header <file>"
+(HTTP/FTP)
+Write the HTTP headers to this file. Write the FTP file info to this
+file if -I/--head is used.
+
+This option is handy to use when you want to store the cookies that a HTTP
+site sends to you. The cookies could then be read in a second curl invoke by
+using the -b/--cookie option!
+.IP "-e/--referer <URL>"
+(HTTP)
+Sends the "Referer Page" information to the HTTP server. Some badly
+done CGIs fail if it's not set. This can also be set with the -H/--header
+flag of course.
+.IP "-E/--cert <certificate[:password]>"
+(HTTPS)
+Tells curl to use the specified certificate file when getting a file
+with HTTPS. The certificate must be in PEM format.
+If the optional password isn't specified, it will be queried for on
+the terminal. Note that this certificate is the private key and the private
+certificate concatenated!
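+
+A made-up example, assuming the PEM file mycert.pem exists and uses the
+password 'secret':
+
+ curl -E mycert.pem:secret https://secure.example.com/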
+.IP "-f/--fail"
+(HTTP)
+Fail silently (no output at all) on server errors. This is mostly done to
+better enable scripts etc to deal with failed attempts. In normal cases
+when a HTTP server fails to deliver a document, it returns a HTML document
+stating so (which often also describes why and more). This flag will prevent
+curl from outputting that and fail silently instead.
+.IP "-F/--form <name=content>"
+(HTTP)
+This lets curl emulate a filled in form in which a user has pressed
+the submit button. This causes curl to POST data using the
+content-type multipart/form-data according to RFC1867. This enables
+uploading of binary files etc. To force the 'content' part to be
+read from a file, prefix the file name with an @ sign. Example, to
+send your password file to the server, where 'password' is the
+name of the form-field to which /etc/passwd will be the input:
+
+.B curl
+-F password=@/etc/passwd www.mypasswords.com
+
+To read the file's content from stdin instead of a file, use - where the file
+name should've been.
+.IP "-h/--help"
+Usage help.
+.IP "-H/--header <header>"
+(HTTP)
+Extra header to use when getting a web page. You may specify any number of
+extra headers. Note that if you should add a custom header that has the same
+name as one of the internal ones curl would use, your externally set header
+will be used instead of the internal one. This allows you to make even
+trickier stuff than curl would normally do. You should not replace internally
+set headers without knowing perfectly well what you're doing.
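+
+For instance (header and URL are only illustrative), an extra header can be
+added like:
+
+ curl -H "X-Custom-Header: yes" http://www.example.com/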
+.IP "-i/--include"
+(HTTP)
+Include the HTTP-header in the output. The HTTP-header includes things
+like server-name, date of the document, HTTP-version and more...
+.IP "-I/--head"
+(HTTP/FTP)
+Fetch the HTTP-header only! HTTP-servers feature the command HEAD
+which this uses to get nothing but the header of a document. When used
+on a FTP file, curl displays the file size only.
+.IP "-K/--config <config file>"
+Specify which config file to read curl arguments from. The config
+file is a text file in which command line arguments can be written
+which then will be used as if they were written on the actual command
+line. If the first column of a config line is a '#' character, the
+rest of the line will be treated as a comment.
+
+Specify the filename as '-' to make curl read the file from stdin.
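+
+A small, hypothetical config file could contain lines like:
+
+ # fail silently and follow Location: headers
+ -f
+ -L
+ -o output.html
+
+and be used with 'curl -K myconfig.txt http://www.example.com/'.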
+.IP "-l/--list-only"
+(FTP)
+When listing an FTP directory, this switch forces a name-only view.
+Especially useful if you want to machine-parse the contents of an FTP
+directory since the normal directory view doesn't use a standard look
+or format.
+.IP "-L/--location"
+(HTTP/HTTPS)
+If the server reports that the requested page has a different location
+(indicated with the header line Location:) this flag will make curl redo
+the request on the new location. If used together with
+-i or -I, headers from all requested pages will be shown.
+.IP "-m/--max-time <seconds>"
+Maximum time in seconds that you allow the whole operation to take.
+This is useful for preventing your batch jobs from hanging for hours
+due to slow networks or links going down.
+This doesn't work properly in win32 systems.
+.IP "-M/--manual"
+Manual. Display the huge help text.
+.IP "-n/--netrc"
+Makes curl scan the
+.I .netrc
+file in the user's home directory for login name and password. This is
+typically used for ftp on unix. If used with http, curl will enable user
+authentication. See
+.BR netrc(5)
+for details on the file format. Curl will not complain if that file
+doesn't have the right permissions (it should not be world nor group
+readable). The environment variable "HOME" is used to find the home
+directory.
+
+A quick and very simple example of how to set up a
+.I .netrc
+to allow curl to ftp to the machine host.domain.com with user name
+'myself' and password 'secret' should look similar to:
+
+.B "machine host.domain.com login myself password secret"
+.IP "-N/--no-buffer"
+Disables the buffering of the output stream. In normal work situations, curl
+will use a standard buffered output stream that will have the effect that it
+will output the data in chunks, not necessarily exactly when the data arrives.
+Using this option will disable that buffering.
+.IP "-o/--output <file>"
+Write output to <file> instead of stdout. If you are using {} or [] to fetch
+multiple documents, you can use '#' followed by a number in the <file>
+specifier. That variable will be replaced with the current string for the URL
+being fetched. Like in:
+
+ curl http://{one,two}.site.com -o "file_#1.txt"
+
+or use several variables like:
+
+ curl http://{site,host}.host[1-5].com -o "#1_#2"
+.IP "-O/--remote-name"
+Write output to a local file named like the remote file we get. (Only
+the file part of the remote file is used, the path is cut off.)
+.IP "-P/--ftpport <address>"
+(FTP)
+Reverses the initiator/listener roles when connecting with ftp. This
+switch makes Curl use the PORT command instead of PASV. In
+practice, PORT tells the server to connect to the client's specified
+address and port, while PASV asks the server for an ip address and
+port to connect to. <address> should be one of:
+.RS
+.TP 12
+.B interface
+i.e "eth0" to specify which interface's IP address you want to use (Unix only)
+.TP
+.B "IP address"
+i.e "192.168.10.1" to specify exact IP number
+.TP
+.B "host name"
+i.e "my.host.domain" to specify machine
+.TP
+.B "-"
+(any single-letter string) to make it pick the machine's default
+.RE
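+
+As an illustration (host and file names are made up), either of these could
+be used for an upload:
+
+ curl -T localfile -P - ftp://ftp.example.com/upload/
+ curl -T localfile -P 192.168.10.1 ftp://ftp.example.com/upload/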
+.IP "-q"
+If used as the first parameter on the command line, the
+.I $HOME/.curlrc
+file will not be read and used as a config file.
+.IP "-Q/--quote <comand>"
+(FTP) Send an arbitrary command to the remote FTP server, by using the QUOTE
+command of the server. Not all servers support this command, and the set of
+QUOTE commands are server specific! Quote commands are sent BEFORE the
+transfer is taking place. To make commands take place after a successful
+transfer, prefix them with a dash '-'. You may specify any amount of commands
+to be run before and after the transfer. If the server returns failure for one
+of the commands, the entire operation will be aborted.
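+
+For example (commands, file and host names are only illustrative), a file
+could be fetched and then deleted from the server after the transfer:
+
+ curl -Q "-DELE file.txt" ftp://ftp.example.com/file.txt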
+.IP "-r/--range <range>"
+(HTTP/FTP)
+Retrieve a byte range (i.e. a partial document) from a HTTP/1.1 or FTP
+server. Ranges can be specified in a number of ways.
+.RS
+.TP 10
+.B 0-499
+specifies the first 500 bytes
+.TP
+.B 500-999
+specifies the second 500 bytes
+.TP
+.B -500
+specifies the last 500 bytes
+.TP
+.B 9500-
+specifies the bytes from offset 9500 and forward
+.TP
+.B 0-0,-1
+specifies the first and last byte only (*)(H)
+.TP
+.B 500-700,600-799
+specifies 300 bytes from offset 500 (H)
+.TP
+.B 100-199,500-599
+specifies two separate 100-byte ranges (*)(H)
+.RE
+
+(*) = NOTE that this will cause the server to reply with a multipart
+response!
+
+You should also be aware that many HTTP/1.1 servers do not have this feature
+enabled, so that when you attempt to get a range, you'll instead get the whole
+document.
+
+FTP range downloads only support the simple syntax 'start-stop' (optionally
+with one of the numbers omitted). It depends on the non-RFC command SIZE.
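+
+For instance (URL made up), the first 500 bytes of a document could be
+fetched with:
+
+ curl -r 0-499 http://www.example.com/document.html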
+.IP "-s/--silent"
+Silent mode. Don't show progress meter or error messages. Makes
+Curl mute.
+.IP "-S/--show-error"
+When used with -s it makes curl show an error message if it fails.
+.IP "-t/--upload"
+Transfer the stdin data to the specified file. Curl will read
+everything from stdin until EOF and store with the supplied name. If
+this is used on a http(s) server, the PUT command will be used.
+.IP "-T/--upload-file <file>"
+Like -t, but this transfers the specified local file. If there is no
+file part in the specified URL, Curl will append the local file
+name. NOTE that you must use a trailing / on the last directory to
+really prove to Curl that there is no file name or curl will
+think that your last directory name is the remote file name to
+use. That will most likely cause the upload operation to fail. If
+this is used on a http(s) server, the PUT command will be used.
+.IP "-u/--user <user:password>"
+Specify user and password to use when fetching. See README.curl for detailed
+examples of how to use this. If no password is specified, curl will
+ask for it interactively.
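+
+A made-up example, passing the name and password on the command line:
+
+ curl -u myname:mypassword ftp://ftp.example.com/private/file.txt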
+.IP "-U/--proxy-user <user:password>"
+Specify user and password to use for Proxy authentication. If no
+password is specified, curl will ask for it interactively.
+.IP "-v/--verbose"
+Makes the fetching more verbose/talkative. Mostly useful for
+debugging. Lines starting with '>' mean data sent by curl, '<'
+means data received by curl that is hidden in normal cases, and lines
+starting with '*' mean additional info provided by curl.
+.IP "-V/--version"
+Displays the full version of curl, libcurl and other 3rd party libraries
+linked with the executable.
+.IP "-w/--write-out <format>"
+Defines what to display after a completed and successful operation. The format
+is a string that may contain plain text mixed with any number of variables. The
+string can be given as "string"; to have it read from a particular file you
+specify it as "@filename", and to tell curl to read the format from stdin you
+write "@-".
+
+The variables present in the output format will be substituted by the value or
+text that curl thinks fits, as described below. All variables are specified
+like %{variable_name} and to output a normal % you just write them like
+%%. You can output a newline by using \\n, a carriage return with \\r and a
+tab with \\t.
+
+.B NOTE:
+The %-letter is a special letter in the win32-environment, where all
+occurrences of % must be doubled when using this option.
+
+Available variables are at this point:
+.RS
+.TP 15
+.B url_effective
+The URL that was fetched last. This is mostly meaningful if you've told curl
+to follow location: headers.
+.TP
+.B http_code
+The numerical code that was found in the last retrieved HTTP(S) page.
+.TP
+.B time_total
+The total time, in seconds, that the full operation lasted. The time will be
+displayed with millisecond resolution.
+.TP
+.B time_namelookup
+The time, in seconds, it took from the start until the name resolving was
+completed.
+.TP
+.B time_connect
+The time, in seconds, it took from the start until the connect to the remote
+host (or proxy) was completed.
+.TP
+.B time_pretransfer
+The time, in seconds, it took from the start until the file transfer is just
+about to begin. This includes all pre-transfer commands and negotiations that
+are specific to the particular protocol(s) involved.
+.TP
+.B size_download
+The total amount of bytes that were downloaded.
+.TP
+.B size_upload
+The total amount of bytes that were uploaded.
+.TP
+.B speed_download
+The average download speed that curl measured for the complete download.
+.TP
+.B speed_upload
+The average upload speed that curl measured for the complete upload.
+.RE
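+
+As a sketch (URL invented), a short summary line could be printed after the
+transfer with:
+
+ curl -w "%{http_code} %{size_download} bytes %{time_total}s\\n" http://www.example.com/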
+.IP "-x/--proxy <proxyhost[:port]>"
+Use the specified proxy. If the port number is not specified, port 1080 is
+assumed.
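+
+For example (the proxy name is just an illustration):
+
+ curl -x proxy.example.com:8080 http://www.example.com/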
+.IP "-X/--request <command>"
+(HTTP)
+Specifies a custom request to use when communicating with the HTTP server.
+The specified request will be used instead of the standard GET. Read the
+HTTP 1.1 specification for details and explanations.
+
+(FTP)
+Specifies a custom FTP command to use instead of LIST when doing file lists
+with ftp.
+.IP "-y/--speed-time <time>"
+If a download is slower than speed-limit bytes per second during a speed-time
+period, the download gets aborted. If speed-time is used, the default
+speed-limit will be 1 unless set with -y.
+.IP "-Y/--speed-limit <speed>"
+If a download is slower than this given speed, in bytes per second, for
+speed-time seconds it gets aborted. speed-time is set with -Y and is 30 if
+not set.
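+
+Combined, a hypothetical example that aborts the transfer if the speed stays
+below 3000 bytes per second for 60 seconds:
+
+ curl -Y 3000 -y 60 http://www.example.com/bigfile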
+.IP "-z/--time-cond <date expression>"
+(HTTP)
+Request a file that has been modified later than the given time and
+date, or one that has been modified before that time. The date expression can
+be all sorts of date strings, and if it doesn't match any internal ones curl
+tries to get the time from a given file name instead! See the
+.BR "GNU date(1)"
+man page for date expression details.
+
+Start the date expression with a dash (-) to make it request a document that
+is older than the given date/time; the default is a document that is newer
+than the specified date/time.
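+
+Two illustrative examples (URLs and date invented), one asking for a newer
+document and one, with the leading dash, for an older one:
+
+ curl -z "1 Jan 2000" http://www.example.com/page.html
+ curl -z "-1 Jan 2000" http://www.example.com/page.html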
+.IP "-3/--sslv3"
+(HTTPS)
+Forces curl to use SSL version 3 when negotiating with a remote SSL server.
+.IP "-2/--sslv2"
+(HTTPS)
+Forces curl to use SSL version 2 when negotiating with a remote SSL server.
+.IP "-#/--progress-bar"
+Make curl display progress information as a progress bar instead of the
+default statistics.
+.IP "--crlf"
+(FTP) Convert LF to CRLF in upload. Useful for MVS (OS/390).
+.IP "--stderr <file>"
+Redirect all writes to stderr to the specified file instead. If the file name
+is a plain '-', it is instead written to stdout. This option has no point when
+you're using a shell with decent redirecting capabilities.
+.SH FILES
+.I ~/.curlrc
+.RS
+Default config file.
+.RE
+
+.SH ENVIRONMENT
+.IP "HTTP_PROXY [protocol://]<host>[:port]"
+Sets proxy server to use for HTTP.
+.IP "HTTPS_PROXY [protocol://]<host>[:port]"
+Sets proxy server to use for HTTPS.
+.IP "FTP_PROXY [protocol://]<host>[:port]"
+Sets proxy server to use for FTP.
+.IP "GOPHER_PROXY [protocol://]<host>[:port]"
+Sets proxy server to use for GOPHER.
+.IP "ALL_PROXY [protocol://]<host>[:port]"
+Sets proxy server to use if no protocol-specific proxy is set.
+.IP "NO_PROXY <comma-separated list of hosts>"
+List of host names that shouldn't go through any proxy. If set to an
+asterisk '*' only, it matches all hosts.
+.IP "COLUMNS <integer>"
+The width of the terminal. This variable only affects curl when the
+--progress-bar option is used.
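+
+For illustration (the proxy host is invented), the variable could be set in
+a Bourne-style shell like:
+
+ HTTP_PROXY="http://proxy.example.com:8080" ; export HTTP_PROXY
+ curl http://www.example.com/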
+.SH EXIT CODES
+There are a number of different error codes, with corresponding error
+messages, that may appear during bad conditions. At the time of this writing,
+the exit codes are:
+.IP 1
+Unsupported protocol. This build of curl has no support for this protocol.
+.IP 2
+Failed to initialize.
+.IP 3
+URL malformat. The syntax was not correct.
+.IP 4
+URL user malformatted. The user-part of the URL syntax was not correct.
+.IP 5
+Couldn't resolve proxy. The given proxy host could not be resolved.
+.IP 6
+Couldn't resolve host. The given remote host was not resolved.
+.IP 7
+Failed to connect to host.
+.IP 8
+FTP weird server reply. The server sent data curl couldn't parse.
+.IP 9
+FTP access denied. The server denied login.
+.IP 10
+FTP user/password incorrect. Either one or both were not accepted by the
+server.
+.IP 11
+FTP weird PASS reply. Curl couldn't parse the reply sent to the PASS request.
+.IP 12
+FTP weird USER reply. Curl couldn't parse the reply sent to the USER request.
+.IP 13
+FTP weird PASV reply. Curl couldn't parse the reply sent to the PASV request.
+.IP 14
+FTP weird 227 format. Curl couldn't parse the 227-line the server sent.
+.IP 15
+FTP can't get host. Couldn't resolve the host IP we got in the 227-line.
+.IP 16
+FTP can't reconnect. Couldn't connect to the host we got in the 227-line.
+.IP 17
+FTP couldn't set binary. Couldn't change transfer method to binary.
+.IP 18
+Partial file. Only a part of the file was transferred.
+.IP 19
+FTP couldn't RETR file. The RETR command failed.
+.IP 20
+FTP write error. The transfer was reported bad by the server.
+.IP 21
+FTP quote error. A quote command returned error from the server.
+.IP 22
+HTTP not found. The requested page was not found. This return code only
+appears if --fail is used.
+.IP 23
+Write error. Curl couldn't write data to a local filesystem or similar.
+.IP 24
+Malformat user. User name badly specified.
+.IP 25
+FTP couldn't STOR file. The server denied the STOR operation.
+.IP 26
+Read error. Various reading problems.
+.IP 27
+Out of memory. A memory allocation request failed.
+.IP 28
+Operation timeout. The specified time-out period was reached according to the
+conditions.
+.IP 29
+FTP couldn't set ASCII. The server returned an unknown reply.
+.IP 30
+FTP PORT failed. The PORT command failed.
+.IP 31
+FTP couldn't use REST. The REST command failed.
+.IP 32
+FTP couldn't use SIZE. The SIZE command failed. The command is an extension
+to the original FTP spec RFC 959.
+.IP 33
+HTTP range error. The range "command" didn't work.
+.IP 34
+HTTP post error. Internal post-request generation error.
+.IP 35
+SSL connect error. The SSL handshaking failed.
+.IP 36
+FTP bad download resume. Couldn't continue an earlier aborted download.
+.IP 37
+FILE couldn't read file. Failed to open the file. Permissions?
+.IP 38
+LDAP cannot bind. LDAP bind operation failed.
+.IP 39
+LDAP search failed.
+.IP 40
+Library not found. The LDAP library was not found.
+.IP 41
+Function not found. A required LDAP function was not found.
+.IP XX
+More error codes will appear here in future releases. The existing ones
+are meant to never change.
+.SH BUGS
+If you do find any (or have other suggestions), mail Daniel Stenberg
+<Daniel.Stenberg@haxx.nu>.
+.SH AUTHORS / CONTRIBUTORS
+ - Daniel Stenberg <Daniel.Stenberg@haxx.nu>
+ - Rafael Sagula <sagula@inf.ufrgs.br>
+ - Sampo Kellomaki <sampo@iki.fi>
+ - Linas Vepstas <linas@linas.org>
+ - Bjorn Reese <breese@mail1.stofanet.dk>
+ - Johan Anderson <johan@homemail.com>
+ - Kjell Ericson <Kjell.Ericson@haxx.nu>
+ - Troy Engel <tengel@sonic.net>
+ - Ryan Nelson <ryan@inch.com>
+ - Bjorn Stenberg <Bjorn.Stenberg@haxx.nu>
+ - Angus Mackay <amackay@gus.ml.org>
+ - Eric Young <eay@cryptsoft.com>
+ - Simon Dick <simond@totally.irrelevant.org>
+ - Oren Tirosh <oren@monty.hishome.net>
+ - Steven G. Johnson <stevenj@alum.mit.edu>
+ - Gilbert Ramirez Jr. <gram@verdict.uthscsa.edu>
+ - Andrés García <ornalux@redestb.es>
+ - Douglas E. Wegscheid <wegscd@whirlpool.com>
+ - Mark Butler <butlerm@xmission.com>
+ - Eric Thelin <eric@generation-i.com>
+ - Marc Boucher <marc@mbsi.ca>
+ - Greg Onufer <Greg.Onufer@Eng.Sun.COM>
+ - Doug Kaufman <dkaufman@rahul.net>
+ - David Eriksson <david@2good.com>
+ - Ralph Beckmann <rabe@uni-paderborn.de>
+ - T. Yamada <tai@imasy.or.jp>
+ - Lars J. Aas <larsa@sim.no>
+ - Jörn Hartroth <Joern.Hartroth@telekom.de>
+ - Matthew Clarke <clamat@van.maves.ca>
+ - Linus Nielsen <Linus.Nielsen@haxx.nu>
+ - Felix von Leitner <felix@convergence.de>
+ - Dan Zitter <dzitter@zitter.net>
+ - Jongki Suwandi <Jongki.Suwandi@eng.sun.com>
+ - Chris Maltby <chris@aurema.com>
+ - Ron Zapp <rzapper@yahoo.com>
+ - Paul Marquis <pmarquis@iname.com>
+ - Ellis Pritchard <ellis@citria.com>
+ - Damien Adant <dams@usa.net>
+ - Chris <cbayliss@csc.come>
+ - Marco G. Salvagno <mgs@whiz.cjb.net>
+.SH WWW
+http://curl.haxx.nu
+.SH FTP
+ftp://ftp.sunet.se/pub/www/utilities/curl/
+.SH "SEE ALSO"
+.BR ftp (1),
+.BR wget (1),
+.BR snarf (1)