1 \input texinfo @c -*-texinfo-*-
4 @documentencoding ISO-8859-1
6 @settitle GNU MP @value{VERSION}
11 @comment %**end of header
14 This manual describes how to install and use the GNU multiple precision
15 arithmetic library, version @value{VERSION}.
17 Copyright 1991, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
18 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 Free Software Foundation, Inc.
20 Permission is granted to copy, distribute and/or modify this document under
21 the terms of the GNU Free Documentation License, Version 1.3 or any later
22 version published by the Free Software Foundation; with no Invariant Sections,
23 with the Front-Cover Texts being ``A GNU Manual'', and with the Back-Cover
24 Texts being ``You have freedom to copy and modify this GNU Manual, like GNU
25 software''. A copy of the license is included in
26 @ref{GNU Free Documentation License}.
28 @c Note the @ref above must be on one line, a line break in an @ref within
29 @c @copying will bomb in recent texinfo.tex (eg. 2004-04-07.08 which comes
30 @c with texinfo 4.7), with messages about missing @endcsname.
33 @c Texinfo version 4.2 or up will be needed to process this file.
35 @c The version number and edition number are taken from version.texi provided
36 @c by automake (note that it's regenerated only if you configure with
37 @c --enable-maintainer-mode).
39 @c Notes discussing the present version number of GMP in relation to previous
40 @c ones (for instance in the "Compatibility" section) must be updated at
43 @c @cindex entries have been made for function categories and programming
44 @c topics. The "mpn" section is not included in this, because a beginner
45 @c looking for "GCD" or something is only going to be confused by pointers to
46 @c low level routines.
48 @c @cindex entries are present for processors and systems when there's
49 @c particular notes concerning them, but not just for everything GMP
52 @c Index entries for files use @code rather than @file, @samp or @option,
53 @c since the latter come out with quotes in TeX, which are nice in the text
54 @c but don't look so good in index columns.
58 @c A suitable texinfo.tex is supplied, a newer one should work equally well.
62 @c Nothing special is done for links to external manuals, they just come out
63 @c in the usual makeinfo style, eg. "../libc/Locales.html". If you have
64 @c local copies of such manuals then this is a good thing, if not then you
65 @c may want to search-and-replace to some online source.
68 @dircategory GNU libraries
70 * gmp: (gmp). GNU Multiple Precision Arithmetic Library.
73 @c html <meta name="description" content="...">
75 How to install and use the GNU multiple precision arithmetic library, version @value{VERSION}.
76 @end documentdescription
83 @node Top, Copying, (dir), (dir)
90 @subtitle The GNU Multiple Precision Arithmetic Library
91 @subtitle Edition @value{EDITION}
92 @subtitle @value{UPDATED}
94 @author by Torbj@"orn Granlund and the GMP development team
95 @c @email{tg@@gmplib.org}
97 @c Include the Distribution inside the titlepage so
98 @c that headings are turned off.
101 \global\parindent=0pt
103 \global\baselineskip=13pt
107 @vskip 0pt plus 1filll
120 @c Don't bother with contents for html, the menus seem adequate.
126 * Copying:: GMP Copying Conditions (LGPL).
127 * Introduction to GMP:: Brief introduction to GNU MP.
128 * Installing GMP:: How to configure and compile the GMP library.
129 * GMP Basics:: What every GMP user should know.
130 * Reporting Bugs:: How to usefully report bugs.
131 * Integer Functions:: Functions for arithmetic on signed integers.
132 * Rational Number Functions:: Functions for arithmetic on rational numbers.
133 * Floating-point Functions:: Functions for arithmetic on floats.
134 * Low-level Functions:: Fast functions for natural numbers.
135 * Random Number Functions:: Functions for generating random numbers.
136 * Formatted Output:: @code{printf} style output.
137 * Formatted Input:: @code{scanf} style input.
138 * C++ Class Interface:: Class wrappers around GMP types.
139 * BSD Compatible Functions:: All functions found in BSD MP.
140 * Custom Allocation:: How to customize the internal allocation.
141 * Language Bindings:: Using GMP from other languages.
142 * Algorithms:: What happens behind the scenes.
143 * Internals:: How values are represented behind the scenes.
145 * Contributors:: Who brings you this library?
146 * References:: Some useful papers and books to read.
147 * GNU Free Documentation License::
153 @c @m{T,N} is $T$ in tex or @math{N} otherwise. This is an easy way to give
154 @c different forms for math in tex and info. Commas in N or T don't work,
155 @c but @C{} can be used instead. \, works in info but not in tex.
171 @c @ms{V,N} is $V_N$ in tex or just vn otherwise. This suits simple
172 @c subscripts like @ms{x,0}.
175 @tex$\V\_{\N\}$@end tex
184 @c @nicode{S} is plain S in info, or @code{S} elsewhere. This can be used
185 @c when the quotes that @code{} gives in info aren't wanted, but the
186 @c fontification in tex or html is wanted. Doesn't work as @nicode{'\\0'}
187 @c though (gives two backslashes in tex).
199 @c @nisamp{S} is plain S in info, or @samp{S} elsewhere. This can be used
200 @c when the quotes that @samp{} gives in info aren't wanted, but the
201 @c fontification in tex or html is wanted.
213 @c Usage: @GMPtimes{}
214 @c Give either \times or the word "times".
216 \gdef\GMPtimes{\times}
224 @c Usage: @GMPmultiply{}
225 @c Give * in info, or nothing in tex.
236 @c Give either |x| in tex, or abs(x) in info or html.
246 @c Usage: @GMPfloor{x}
247 @c Give either \lfloor x\rfloor in tex, or floor(x) in info or html.
249 \gdef\GMPfloor#1{\lfloor #1\rfloor}
257 @c Usage: @GMPceil{x}
258 @c Give either \lceil x\rceil in tex, or ceil(x) in info or html.
260 \gdef\GMPceil#1{\lceil #1 \rceil}
268 @c Math operators already available in tex, made available in info too.
269 @c For example @bmod{} can be used in both tex and info.
297 @c New math operators.
298 @c @abs{} can be used in both tex and info, or just \abs in tex.
300 \gdef\abs{\mathop{\rm abs}}
308 @c @cross{} is a \times symbol in tex, or an "x" in info. In tex it works
309 @c inside or outside $ $.
311 \gdef\cross{\ifmmode\times\else$\times$\fi}
319 @c @times{} made available as a "*" in info and html (already works in tex).
327 @c Like @w{} but working in math mode too.
329 \gdef\W#1{\ifmmode{#1}\else\w{#1}\fi}
337 @c Usage: \GMPdisplay{text}
338 @c Put the given text in an @display style indent, but without turning off
339 @c paragraph reflow etc.
343 \advance\leftskip by \lispnarrowing
348 @c A new \hat that will work in math mode, unlike the texinfo redefined
351 \gdef\GMPhat{\mathaccent"705E}
354 @c Usage: \GMPraise{text}
355 @c For use in a $ $ math expression as an alternative to "^". This is good
356 @c for @code{} in an exponent, since there seems to be no superscript font
359 \gdef\GMPraise#1{\mskip0.5\thinmuskip\hbox{\raise0.8ex\hbox{#1}}}
362 @c Usage: @texlinebreak{}
363 @c A line break as per @*, but only in tex.
374 @c Usage: @maybepagebreak
375 @c Allow tex to insert a page break, if it feels the urge.
376 @c Normally blocks of @deftypefun/funx are kept together, which can lead to
377 @c some poor page break positioning if it's a big block, like the sets of
378 @c division functions etc.
380 \gdef\maybepagebreak{\penalty0}
383 @macro maybepagebreak
387 @c Usage: @GMPreftop{info,title}
388 @c Usage: @GMPpxreftop{info,title}
390 @c Like @ref{} and @pxref{}, but designed for a reference to the top of a
391 @c document, not a particular section. The TeX output for plain @ref insists
392 @c on printing a particular section, GMPreftop gives just the title.
394 @c The texinfo manual recommends putting a likely section name in references
395 @c like this, eg. "Introduction", but it seems better to just give the title.
398 @macro GMPreftop{info,title}
401 @macro GMPpxreftop{info,title}
407 @macro GMPreftop{info,title}
408 @ref{Top,\title\,\title\,\info\,\title\}
410 @macro GMPpxreftop{info,title}
411 @pxref{Top,\title\,\title\,\info\,\title\}
416 @node Copying, Introduction to GMP, Top, Top
417 @comment node-name, next, previous, up
418 @unnumbered GNU MP Copying Conditions
419 @cindex Copying conditions
420 @cindex Conditions for copying GNU MP
421 @cindex License conditions
423 This library is @dfn{free}; this means that everyone is free to use it and
424 free to redistribute it on a free basis. The library is not in the public
425 domain; it is copyrighted and there are restrictions on its distribution, but
426 these restrictions are designed to permit everything that a good cooperating
427 citizen would want to do. What is not allowed is to try to prevent others
428 from further sharing any version of this library that they might get from
431 Specifically, we want to make sure that you have the right to give away copies
432 of the library, that you receive source code or else can get it if you want
433 it, that you can change this library or use pieces of it in new free programs,
434 and that you know you can do these things.@refill
436 To make sure that everyone has such rights, we have to forbid you to deprive
437 anyone else of these rights. For example, if you distribute copies of the GNU
438 MP library, you must give the recipients all the rights that you have. You
439 must make sure that they, too, receive or can get the source code. And you
440 must tell them their rights.@refill
442 Also, for our own protection, we must make certain that everyone finds out
443 that there is no warranty for the GNU MP library. If it is modified by
444 someone else and passed on, we want their recipients to know that what they
445 have is not what we distributed, so that any problems introduced by others
446 will not reflect on our reputation.@refill
448 The precise conditions of the license for the GNU MP library are found in the
449 Lesser General Public License version 3 that accompanies the source code,
450 see @file{COPYING.LIB}. Certain demonstration programs are provided under the
451 terms of the plain General Public License version 3, see @file{COPYING}.
454 @node Introduction to GMP, Installing GMP, Copying, Top
455 @comment node-name, next, previous, up
456 @chapter Introduction to GNU MP
459 GNU MP is a portable library written in C for arbitrary precision arithmetic
460 on integers, rational numbers, and floating-point numbers. It aims to provide
461 the fastest possible arithmetic for all applications that need higher
462 precision than is directly supported by the basic C types.
464 Many applications use just a few hundred bits of precision; but some
465 applications may need thousands or even millions of bits. GMP is designed to
466 give good performance for both, by choosing algorithms based on the sizes of
467 the operands, and by carefully keeping the overhead at a minimum.
469 The speed of GMP is achieved by using fullwords as the basic arithmetic type,
470 by using sophisticated algorithms, by including carefully optimized assembly
471 code for the most common inner loops for many different CPUs, and by a general
472 emphasis on speed (as opposed to simplicity or elegance).
474 There is assembly code for these CPUs:
477 DEC Alpha 21064, 21164, and 21264,
479 AMD K6, K6-2, Athlon, and Athlon64,
480 Hitachi SuperH and SH-2,
481 HPPA 1.0, 1.1 and 2.0,
482 Intel Pentium, Pentium Pro/II/III, Pentium 4, generic x86,
484 Motorola MC68000, MC68020, MC88100, and MC88110,
485 Motorola/IBM PowerPC 32 and 64,
489 SPARCv7, SuperSPARC, generic SPARCv8, UltraSPARC,
493 Some optimizations also for
503 For up-to-date information on GMP, please see the GMP web pages at
506 @uref{http://gmplib.org/}
509 @cindex Latest version of GMP
510 @cindex Anonymous FTP of latest version
511 @cindex FTP of latest version
513 The latest version of the library is available at
516 @uref{ftp://ftp.gnu.org/gnu/gmp/}
519 Many sites around the world mirror @samp{ftp.gnu.org}, please use a mirror
520 near you, see @uref{http://www.gnu.org/order/ftp.html} for a full list.
522 @cindex Mailing lists
523 There are three public mailing lists of interest. One for release
524 announcements, one for general questions and discussions about usage of the GMP
525 library and one for bug reports. For more information, see
528 @uref{http://gmplib.org/mailman/listinfo/}.
531 The proper place for bug reports is @email{gmp-bugs@@gmplib.org}. See
532 @ref{Reporting Bugs} for information about reporting bugs.
535 @section How to use this Manual
536 @cindex About this manual
538 Everyone should read @ref{GMP Basics}. If you need to install the library
539 yourself, then read @ref{Installing GMP}. If you have a system with multiple
540 ABIs, then read @ref{ABI and ISA}, for the compiler options that must be used
543 The rest of the manual can be used for later reference, although it is
544 probably a good idea to glance through it.
547 @node Installing GMP, GMP Basics, Introduction to GMP, Top
548 @comment node-name, next, previous, up
549 @chapter Installing GMP
550 @cindex Installing GMP
551 @cindex Configuring GMP
554 GMP has an autoconf/automake/libtool based configuration system. On a
555 Unix-like system a basic build can be done with
563 Some self-tests can be run with
570 And you can install (under @file{/usr/local} by default) with
576 If you experience problems, please report them to @email{gmp-bugs@@gmplib.org}.
577 See @ref{Reporting Bugs}, for information on what to include in useful bug
583 * Notes for Package Builds::
584 * Notes for Particular Systems::
585 * Known Build Problems::
586 * Performance optimization::
590 @node Build Options, ABI and ISA, Installing GMP, Installing GMP
591 @section Build Options
592 @cindex Build options
594 All the usual autoconf configure options are available, run @samp{./configure
595 --help} for a summary. The file @file{INSTALL.autoconf} has some generic
596 installation information too.
600 @cindex Non-Unix systems
601 @samp{configure} requires various Unix-like tools. See @ref{Notes for
602 Particular Systems}, for some options on non-Unix systems.
604 It might be possible to build without the help of @samp{configure}, certainly
605 all the code is there, but unfortunately you'll be on your own.
607 @item Build Directory
608 @cindex Build directory
609 To compile in a separate build directory, @command{cd} to that directory, and
610 prefix the configure command with the path to the GMP source directory. For
615 /my/sources/gmp-@value{VERSION}/configure
618 Not all @samp{make} programs have the necessary features (@code{VPATH}) to
619 support this. In particular, SunOS and Slowaris @command{make} have bugs that
620 make them unable to build in a separate directory. Use GNU @command{make}
623 @item @option{--prefix} and @option{--exec-prefix}
626 @cindex Install prefix
627 @cindex @code{--prefix}
628 @cindex @code{--exec-prefix}
629 The @option{--prefix} option can be used in the normal way to direct GMP to
630 install under a particular tree. The default is @samp{/usr/local}.
632 @option{--exec-prefix} can be used to direct architecture-dependent files like
633 @file{libgmp.a} to a different location. This can be used to share
634 architecture-independent parts like the documentation, but separate the
635 dependent parts. Note however that @file{gmp.h} and @file{mp.h} are
636 architecture-dependent since they encode certain aspects of @file{libgmp}, so
637 it will be necessary to ensure both @file{$prefix/include} and
638 @file{$exec_prefix/include} are available to the compiler.
640 @item @option{--disable-shared}, @option{--disable-static}
641 @cindex @code{--disable-shared}
642 @cindex @code{--disable-static}
643 By default both shared and static libraries are built (where possible), but
644 one or other can be disabled. Shared libraries result in smaller executables
645 and permit code sharing between separate running processes, but on some CPUs
646 are slightly slower, having a small cost on each function call.
648 @item Native Compilation, @option{--build=CPU-VENDOR-OS}
649 @cindex Native compilation
651 @cindex @code{--build}
652 For normal native compilation, the system can be specified with
653 @samp{--build}. By default @samp{./configure} uses the output from running
654 @samp{./config.guess}. On some systems @samp{./config.guess} can determine
655 the exact CPU type, on others it will be necessary to give it explicitly. For
659 ./configure --build=ultrasparc-sun-solaris2.7
662 In all cases the @samp{OS} part is important, since it controls how libtool
663 generates shared libraries. Running @samp{./config.guess} is the simplest way
664 to see what it should be, if you don't know already.
666 @item Cross Compilation, @option{--host=CPU-VENDOR-OS}
667 @cindex Cross compiling
669 @cindex @code{--host}
670 When cross-compiling, the system used for compiling is given by @samp{--build}
671 and the system where the library will run is given by @samp{--host}. For
672 example when using a FreeBSD Athlon system to build GNU/Linux m68k binaries,
675 ./configure --build=athlon-pc-freebsd3.5 --host=m68k-mac-linux-gnu
678 Compiler tools are sought first with the host system type as a prefix. For
679 example @command{m68k-mac-linux-gnu-ranlib} is tried, then plain
680 @command{ranlib}. This makes it possible for a set of cross-compiling tools
681 to co-exist with native tools. The prefix is the argument to @samp{--host},
682 and this can be an alias, such as @samp{m68k-linux}. But note that tools
683 don't have to be setup this way, it's enough to just have a @env{PATH} with a
684 suitable cross-compiling @command{cc} etc.
686 Compiling for a different CPU in the same family as the build system is a form
687 of cross-compilation, though very possibly this would merely be special
688 options on a native compiler. In any case @samp{./configure} avoids depending
689 on being able to run code on the build system, which is important when
690 creating binaries for a newer CPU since they very possibly won't run on the
693 In all cases the compiler must be able to produce an executable (of whatever
694 format) from a standard C @code{main}. Although only object files will go to
695 make up @file{libgmp}, @samp{./configure} uses linking tests for various
696 purposes, such as determining what functions are available on the host system.
698 Currently a warning is given unless an explicit @samp{--build} is used when
699 cross-compiling, because it may not be possible to correctly guess the build
700 system type if the @env{PATH} has only a cross-compiling @command{cc}.
702 Note that the @samp{--target} option is not appropriate for GMP@. It's for use
703 when building compiler tools, with @samp{--host} being where they will run,
704 and @samp{--target} what they'll produce code for. Ordinary programs or
705 libraries like GMP are only interested in the @samp{--host} part, being where
706 they'll run. (Some past versions of GMP used @samp{--target} incorrectly.)
710 In general, if you want a library that runs as fast as possible, you should
711 configure GMP for the exact CPU type your system uses. However, this may mean
712 the binaries won't run on older members of the family, and might run slower on
713 other members, older or newer. The best idea is always to build GMP for the
714 exact machine type you intend to run it on.
716 The following CPUs have specific support. See @file{configure.in} for details
717 of what code and compiler options they select.
721 @c Keep this formatting, it's easy to read and it can be grepped to
722 @c automatically test that CPUs listed get through ./config.sub
796 @nisamp{powerpc603e},
798 @nisamp{powerpc604e},
802 @nisamp{powerpc7400},
803 @nisamp{powerpc7450},
819 @nisamp{ultrasparc2},
820 @nisamp{ultrasparc2i},
821 @nisamp{ultrasparc3},
857 CPUs not listed will use generic C code.
859 @item Generic C Build
861 If some of the assembly code causes problems, or if otherwise desired, the
862 generic C code can be selected with CPU @samp{none}. For example,
865 ./configure --host=none-unknown-freebsd3.5
868 Note that this will run quite slowly, but it should be portable and should at
869 least make it possible to get something running if all else fails.
871 @item Fat binary, @option{--enable-fat}
873 @cindex @option{--enable-fat}
874 Using @option{--enable-fat} selects a ``fat binary'' build on x86, where
875 optimized low level subroutines are chosen at runtime according to the CPU
876 detected. This means more code, but gives good performance on all x86 chips.
877 (This option might become available for more architectures in the future.)
881 On some systems GMP supports multiple ABIs (application binary interfaces),
882 meaning data type sizes and calling conventions. By default GMP chooses the
883 best ABI available, but a particular ABI can be selected. For example
886 ./configure --host=mips64-sgi-irix6 ABI=n32
889 See @ref{ABI and ISA}, for the available choices on relevant CPUs, and what
890 applications need to do.
892 @item @option{CC}, @option{CFLAGS}
895 @cindex @code{CFLAGS}
896 By default the C compiler used is chosen from among some likely candidates,
897 with @command{gcc} normally preferred if it's present. The usual
898 @samp{CC=whatever} can be passed to @samp{./configure} to choose something
901 For various systems, default compiler flags are set based on the CPU and
902 compiler. The usual @samp{CFLAGS="-whatever"} can be passed to
903 @samp{./configure} to use something different or to set good flags for systems
904 GMP doesn't otherwise know.
906 The @samp{CC} and @samp{CFLAGS} used are printed during @samp{./configure},
907 and can be found in each generated @file{Makefile}. This is the easiest way
908 to check the defaults when considering changing or adding something.
910 Note that when @samp{CC} and @samp{CFLAGS} are specified on a system
911 supporting multiple ABIs it's important to give an explicit
912 @samp{ABI=whatever}, since GMP can't determine the ABI just from the flags and
913 won't be able to select the correct assembly code.
915 If just @samp{CC} is selected then normal default @samp{CFLAGS} for that
916 compiler will be used (if GMP recognises it). For example @samp{CC=gcc} can
917 be used to force the use of GCC, with default flags (and default ABI).
919 @item @option{CPPFLAGS}
920 @cindex @code{CPPFLAGS}
921 Any flags like @samp{-D} defines or @samp{-I} includes required by the
922 preprocessor should be set in @samp{CPPFLAGS} rather than @samp{CFLAGS}.
923 Compiling is done with both @samp{CPPFLAGS} and @samp{CFLAGS}, but
924 preprocessing uses just @samp{CPPFLAGS}. This distinction is because most
925 preprocessors won't accept all the flags the compiler does. Preprocessing is
926 done separately in some configure tests, and in the @samp{ansi2knr} support
929 @item @option{CC_FOR_BUILD}
930 @cindex @code{CC_FOR_BUILD}
931 Some build-time programs are compiled and run to generate host-specific data
932 tables. @samp{CC_FOR_BUILD} is the compiler used for this. It doesn't need
933 to be in any particular ABI or mode, it merely needs to generate executables
934 that can run. The default is to try the selected @samp{CC} and some likely
935 candidates such as @samp{cc} and @samp{gcc}, looking for something that works.
937 No flags are used with @samp{CC_FOR_BUILD} because a simple invocation like
938 @samp{cc foo.c} should be enough. If some particular options are required
939 they can be included as for instance @samp{CC_FOR_BUILD="cc -whatever"}.
941 @item C++ Support, @option{--enable-cxx}
943 @cindex @code{--enable-cxx}
944 C++ support in GMP can be enabled with @samp{--enable-cxx}, in which case a
945 C++ compiler will be required. As a convenience @samp{--enable-cxx=detect}
946 can be used to enable C++ support only if a compiler can be found. The C++
947 support consists of a library @file{libgmpxx.la} and header file
948 @file{gmpxx.h} (@pxref{Headers and Libraries}).
950 A separate @file{libgmpxx.la} has been adopted rather than having C++ objects
951 within @file{libgmp.la} in order to ensure dynamic linked C programs aren't
952 bloated by a dependency on the C++ standard library, and to avoid any chance
953 that the C++ compiler could be required when linking plain C programs.
955 @file{libgmpxx.la} will use certain internals from @file{libgmp.la} and can
956 only be expected to work with @file{libgmp.la} from the same GMP version.
957 Future changes to the relevant internals will be accompanied by renaming, so a
958 mismatch will cause unresolved symbols rather than perhaps mysterious
961 In general @file{libgmpxx.la} will be usable only with the C++ compiler that
962 built it, since name mangling and runtime support are usually incompatible
963 between different compilers.
965 @item @option{CXX}, @option{CXXFLAGS}
968 @cindex @code{CXXFLAGS}
969 When C++ support is enabled, the C++ compiler and its flags can be set with
970 variables @samp{CXX} and @samp{CXXFLAGS} in the usual way. The default for
971 @samp{CXX} is the first compiler that works from a list of likely candidates,
972 with @command{g++} normally preferred when available. The default for
973 @samp{CXXFLAGS} is to try @samp{CFLAGS}, @samp{CFLAGS} without @samp{-g}, then
974 for @command{g++} either @samp{-g -O2} or @samp{-O2}, or for other compilers
975 @samp{-g} or nothing. Trying @samp{CFLAGS} this way is convenient when using
976 @samp{gcc} and @samp{g++} together, since the flags for @samp{gcc} will
977 usually suit @samp{g++}.
979 It's important that the C and C++ compilers match, meaning their startup and
980 runtime support routines are compatible and that they generate code in the
981 same ABI (if there's a choice of ABIs on the system). @samp{./configure}
982 isn't currently able to check these things very well itself, so for that
983 reason @samp{--disable-cxx} is the default, to avoid a build failure due to a
984 compiler mismatch. Perhaps this will change in the future.
986 Incidentally, it's normally not good enough to set @samp{CXX} to the same as
987 @samp{CC}. Although @command{gcc} for instance recognises @file{foo.cc} as
988 C++ code, only @command{g++} will invoke the linker the right way when
989 building an executable or shared library from C++ object files.
991 @item Temporary Memory, @option{--enable-alloca=<choice>}
992 @cindex Temporary memory
993 @cindex Stack overflow
994 @cindex @code{alloca}
995 @cindex @code{--enable-alloca}
996 GMP allocates temporary workspace using one of the following three methods,
997 which can be selected with for instance
998 @samp{--enable-alloca=malloc-reentrant}.
1002 @samp{alloca} - C library or compiler builtin.
1004 @samp{malloc-reentrant} - the heap, in a re-entrant fashion.
1006 @samp{malloc-notreentrant} - the heap, with global variables.
1009 For convenience, the following choices are also available.
1010 @samp{--disable-alloca} is the same as @samp{no}.
1014 @samp{yes} - a synonym for @samp{alloca}.
1016 @samp{no} - a synonym for @samp{malloc-reentrant}.
1018 @samp{reentrant} - @code{alloca} if available, otherwise
1019 @samp{malloc-reentrant}. This is the default.
1021 @samp{notreentrant} - @code{alloca} if available, otherwise
1022 @samp{malloc-notreentrant}.
1025 @code{alloca} is reentrant and fast, and is recommended. It actually allocates
1026 just small blocks on the stack; larger ones use malloc-reentrant.
1028 @samp{malloc-reentrant} is, as the name suggests, reentrant and thread safe,
1029 but @samp{malloc-notreentrant} is faster and should be used if reentrancy is
1032 The two malloc methods in fact use the memory allocation functions selected by
1033 @code{mp_set_memory_functions}, these being @code{malloc} and friends by
1034 default. @xref{Custom Allocation}.
1036 An additional choice @samp{--enable-alloca=debug} is available, to help when
1037 debugging memory related problems (@pxref{Debugging}).
1039 @item FFT Multiplication, @option{--disable-fft}
1040 @cindex FFT multiplication
1041 @cindex @code{--disable-fft}
1042 By default multiplications are done using Karatsuba, 3-way Toom, and
1043 Fermat FFT@. The FFT is only used on large to very large operands and can be
1044 disabled to save code size if desired.
1046 @item Berkeley MP, @option{--enable-mpbsd}
1047 @cindex Berkeley MP compatible functions
1048 @cindex BSD MP compatible functions
1049 @cindex @code{--enable-mpbsd}
1050 The Berkeley MP compatibility library (@file{libmp}) and header file
1051 (@file{mp.h}) are built and installed only if @option{--enable-mpbsd} is used.
1052 @xref{BSD Compatible Functions}.
1054 @item Assertion Checking, @option{--enable-assert}
1055 @cindex Assertion checking
1056 @cindex @code{--enable-assert}
1057 This option enables some consistency checking within the library. This can be
1058 of use while debugging, @pxref{Debugging}.
1060 @item Execution Profiling, @option{--enable-profiling=prof/gprof/instrument}
1061 @cindex Execution profiling
1062 @cindex @code{--enable-profiling}
1063 Enable profiling support, in one of various styles, @pxref{Profiling}.
1065 @item @option{MPN_PATH}
1066 @cindex @code{MPN_PATH}
1067 Various assembly versions of each mpn subroutines are provided. For a given
1068 CPU, a search is made though a path to choose a version of each. For example
1072 MPN_PATH="sparc32/v8 sparc32 generic"
1075 which means look first for v8 code, then plain sparc32 (which is v7), and
1076 finally fall back on generic C@. Knowledgeable users with special requirements
1077 can specify a different path. Normally this is completely unnecessary.
1080 @cindex Documentation formats
1082 The source for the document you're now reading is @file{doc/gmp.texi}, in
1083 Texinfo format, see @GMPreftop{texinfo, Texinfo}.
1088 Info format @samp{doc/gmp.info} is included in the distribution. The usual
1089 automake targets are available to make PostScript, DVI, PDF and HTML (these
1090 will require various @TeX{} and Texinfo tools).
1094 DocBook and XML can be generated by the Texinfo @command{makeinfo} program
1095 too, see @ref{makeinfo options,, Options for @command{makeinfo}, texinfo,
1098 Some supplementary notes can also be found in the @file{doc} subdirectory.
1104 @node ABI and ISA, Notes for Package Builds, Build Options, Installing GMP
1105 @section ABI and ISA
1107 @cindex Application Binary Interface
1109 @cindex Instruction Set Architecture
1111 ABI (Application Binary Interface) refers to the calling conventions between
1112 functions, meaning what registers are used and what sizes the various C data
1113 types are. ISA (Instruction Set Architecture) refers to the instructions and
1114 registers a CPU has available.
1116 Some 64-bit ISA CPUs have both a 64-bit ABI and a 32-bit ABI defined, the
1117 latter for compatibility with older CPUs in the family. GMP supports some
1118 CPUs like this in both ABIs. In fact within GMP @samp{ABI} means a
1119 combination of chip ABI, plus how GMP chooses to use it. For example in some
1120 32-bit ABIs, GMP may support a limb as either a 32-bit @code{long} or a 64-bit
1123 By default GMP chooses the best ABI available for a given system, and this
1124 generally gives significantly greater speed. But an ABI can be chosen
1125 explicitly to make GMP compatible with other libraries, or particular
1126 application requirements. For example,
1132 In all cases it's vital that all object code used in a given program is
1133 compiled for the same ABI.
1135 Usually a limb is implemented as a @code{long}. When a @code{long long} limb
1136 is used this is encoded in the generated @file{gmp.h}. This is convenient for
1137 applications, but it does mean that @file{gmp.h} will vary, and can't be just
1138 copied around. @file{gmp.h} remains compiler independent though, since all
1139 compilers for a particular ABI will be expected to use the same limb type.
1141 Currently no attempt is made to follow whatever conventions a system has for
1142 installing library or header files built for a particular ABI@. This will
1143 probably only matter when installing multiple builds of GMP, and it might be
1144 as simple as configuring with a special @samp{libdir}, or it might require
1145 more than that. Note that builds for different ABIs need to done separately,
1146 with a fresh @command{./configure} and @command{make} each.
1151 @item AMD64 (@samp{x86_64})
1153 On AMD64 systems supporting both 32-bit and 64-bit modes for applications, the
1154 following ABI choices are available.
1158 The 64-bit ABI uses 64-bit limbs and pointers and makes full use of the chip
1159 architecture. This is the default. Applications will usually not need
1160 special compiler flags, but for reference the option is
1167 The 32-bit ABI is the usual i386 conventions. This will be slower, and is not
1168 recommended except for inter-operating with other code not yet 64-bit capable.
1169 Applications must be compiled with
1175 (In GCC 2.95 and earlier there's no @samp{-m32} option, it's the only mode.)
1180 @item HPPA 2.0 (@samp{hppa2.0*}, @samp{hppa64})
1184 @item @samp{ABI=2.0w}
1185 The 2.0w ABI uses 64-bit limbs and pointers and is available on HP-UX 11 or
1186 up. Applications must be compiled with
1189 gcc [built for 2.0w]
1193 @item @samp{ABI=2.0n}
1194 The 2.0n ABI means the 32-bit HPPA 1.0 ABI and all its normal calling
1195 conventions, but with 64-bit instructions permitted within functions. GMP
1196 uses a 64-bit @code{long long} for a limb. This ABI is available on hppa64
1197 GNU/Linux and on HP-UX 10 or higher. Applications must be compiled with
1200 gcc [built for 2.0n]
1204 Note that current versions of GCC (eg.@: 3.2) don't generate 64-bit
1205 instructions for @code{long long} operations and so may be slower than for
1206 2.0w. (The GMP assembly code is the same though.)
1208 @item @samp{ABI=1.0}
1209 HPPA 2.0 CPUs can run all HPPA 1.0 and 1.1 code in the 32-bit HPPA 1.0 ABI@.
1210 No special compiler options are needed for applications.
1213 All three ABIs are available for CPU types @samp{hppa2.0w}, @samp{hppa2.0} and
1214 @samp{hppa64}, but for CPU type @samp{hppa2.0n} only 2.0n or 1.0 are
1217 Note that GCC on HP-UX has no options to choose between 2.0n and 2.0w modes,
1218 unlike HP @command{cc}. Instead it must be built for one or the other ABI@.
1219 GMP will detect how it was built, and skip to the corresponding @samp{ABI}.
1223 @item IA-64 under HP-UX (@samp{ia64*-*-hpux*}, @samp{itanium*-*-hpux*})
1226 HP-UX supports two ABIs for IA-64. GMP performance is the same in both.
1230 In the 32-bit ABI, pointers, @code{int}s and @code{long}s are 32 bits and GMP
1231 uses a 64 bit @code{long long} for a limb. Applications can be compiled
1232 without any special flags since this ABI is the default in both HP C and GCC,
1233 but for reference the flags are
1241 In the 64-bit ABI, @code{long}s and pointers are 64 bits and GMP uses a
1242 @code{long} for a limb. Applications must be compiled with
1250 On other IA-64 systems, GNU/Linux for instance, @samp{ABI=64} is the only
1255 @item MIPS under IRIX 6 (@samp{mips*-*-irix[6789]})
1258 IRIX 6 always has a 64-bit MIPS 3 or better CPU, and supports ABIs o32, n32,
1259 and 64. n32 or 64 are recommended, and GMP performance will be the same in
1260 each. The default is n32.
1263 @item @samp{ABI=o32}
1264 The o32 ABI is 32-bit pointers and integers, and no 64-bit operations. GMP
1265 will be slower than in n32 or 64, this option only exists to support old
1266 compilers, eg.@: GCC 2.7.2. Applications can be compiled with no special
1267 flags on an old compiler, or on a newer compiler with
1274 @item @samp{ABI=n32}
1275 The n32 ABI is 32-bit pointers and integers, but with a 64-bit limb using a
1276 @code{long long}. Applications must be compiled with
1284 The 64-bit ABI is 64-bit pointers and integers. Applications must be compiled
1293 Note that MIPS GNU/Linux, as of kernel version 2.2, doesn't have the necessary
1294 support for n32 or 64 and so only gets a 32-bit limb and the MIPS 2 code.
1298 @item PowerPC 64 (@samp{powerpc64}, @samp{powerpc620}, @samp{powerpc630}, @samp{powerpc970}, @samp{power4}, @samp{power5})
1301 @item @samp{ABI=aix64}
1303 The AIX 64 ABI uses 64-bit limbs and pointers and is the default on PowerPC 64
1304 @samp{*-*-aix*} systems. Applications must be compiled with
1311 @item @samp{ABI=mode64}
1312 The @samp{mode64} ABI uses 64-bit limbs and pointers, and is the default on
1313 64-bit GNU/Linux, BSD, and Mac OS X/Darwin systems. Applications must be
1320 @item @samp{ABI=mode32}
1322 The @samp{mode32} ABI uses a 64-bit @code{long long} limb but with the chip
1323 still in 32-bit mode and using 32-bit calling conventions. This is the default
1324 on for systems where the true 64-bit ABIs are unavailable. No special compiler
1325 options are needed for applications.
1328 This is the basic 32-bit PowerPC ABI, with a 32-bit limb. No special compiler
1329 options are needed for applications.
1332 GMP speed is greatest in @samp{aix64} and @samp{mode32}. In @samp{ABI=32}
1333 only the 32-bit ISA is used and this doesn't make full use of a 64-bit chip.
1334 On a suitable system we could perhaps use more of the ISA, but there are no
1339 @item Sparc V9 (@samp{sparc64}, @samp{sparcv9}, @samp{ultrasparc*})
1345 The 64-bit V9 ABI is available on the various BSD sparc64 ports, recent
1346 versions of Sparc64 GNU/Linux, and Solaris 2.7 and up (when the kernel is in
1347 64-bit mode). GCC 3.2 or higher, or Sun @command{cc} is required. On
1348 GNU/Linux, depending on the default @command{gcc} mode, applications must be
1355 On Solaris applications must be compiled with
1358 gcc -m64 -mptr64 -Wa,-xarch=v9 -mcpu=v9
1362 On the BSD sparc64 systems no special options are required, since 64-bits is
1363 the only ABI available.
1366 For the basic 32-bit ABI, GMP still uses as much of the V9 ISA as it can. In
1367 the Sun documentation this combination is known as ``v8plus''. On GNU/Linux,
1368 depending on the default @command{gcc} mode, applications may need to be
1375 On Solaris, no special compiler options are required for applications, though
1376 using something like the following is recommended. (@command{gcc} 2.8 and
1377 earlier only support @samp{-mv8} though.)
1385 GMP speed is greatest in @samp{ABI=64}, so it's the default where available.
1386 The speed is partly because there are extra registers available and partly
1387 because 64-bits is considered the more important case and has therefore had
1388 better code written for it.
1390 Don't be confused by the names of the @samp{-m} and @samp{-x} compiler
1391 options, they're called @samp{arch} but effectively control both ABI and ISA@.
1393 On Solaris 2.6 and earlier, only @samp{ABI=32} is available since the kernel
1394 doesn't save all registers.
1396 On Solaris 2.7 with the kernel in 32-bit mode, a normal native build will
1397 reject @samp{ABI=64} because the resulting executables won't run.
1398 @samp{ABI=64} can still be built if desired by making it look like a
1399 cross-compile, for example
1402 ./configure --build=none --host=sparcv9-sun-solaris2.7 ABI=64
1408 @node Notes for Package Builds, Notes for Particular Systems, ABI and ISA, Installing GMP
1409 @section Notes for Package Builds
1410 @cindex Build notes for binary packaging
1411 @cindex Packaged builds
1413 GMP should present no great difficulties for packaging in a binary
1416 @cindex Libtool versioning
1417 @cindex Shared library versioning
1418 Libtool is used to build the library and @samp{-version-info} is set
1419 appropriately, having started from @samp{3:0:0} in GMP 3.0 (@pxref{Versioning,
1420 Library interface versions, Library interface versions, libtool, GNU
1423 The GMP 4 series will be upwardly binary compatible in each release and will
1424 be upwardly binary compatible with all of the GMP 3 series. Additional
1425 function interfaces may be added in each release, so on systems where libtool
1426 versioning is not fully checked by the loader an auxiliary mechanism may be
1427 needed to express that a dynamic linked application depends on a new enough
1430 An auxiliary mechanism may also be needed to express that @file{libgmpxx.la}
1431 (from @option{--enable-cxx}, @pxref{Build Options}) requires @file{libgmp.la}
1432 from the same GMP version, since this is not done by the libtool versioning,
1433 nor otherwise. A mismatch will result in unresolved symbols from the linker,
1434 or perhaps the loader.
1436 When building a package for a CPU family, care should be taken to use
1437 @samp{--host} (or @samp{--build}) to choose the least common denominator among
1438 the CPUs which might use the package. For example this might mean plain
1439 @samp{sparc} (meaning V7) for SPARCs.
1441 For x86s, @option{--enable-fat} sets things up for a fat binary build, making a
1442 runtime selection of optimized low level routines. This is a good choice for
1443 packaging to run on a range of x86 chips.
1445 Users who care about speed will want GMP built for their exact CPU type, to
1446 make best use of the available optimizations. Providing a way to suitably
1447 rebuild a package may be useful. This could be as simple as making it
1448 possible for a user to omit @samp{--build} (and @samp{--host}) so
1449 @samp{./config.guess} will detect the CPU@. But a way to manually specify a
1450 @samp{--build} will be wanted for systems where @samp{./config.guess} is
1453 On systems with multiple ABIs, a packaged build will need to decide which
1454 among the choices is to be provided, see @ref{ABI and ISA}. A given run of
1455 @samp{./configure} etc will only build one ABI@. If a second ABI is also
1456 required then a second run of @samp{./configure} etc must be made, starting
1457 from a clean directory tree (@samp{make distclean}).
1459 As noted under ``ABI and ISA'', currently no attempt is made to follow system
1460 conventions for install locations that vary with ABI, such as
1461 @file{/usr/lib/sparcv9} for @samp{ABI=64} as opposed to @file{/usr/lib} for
1462 @samp{ABI=32}. A package build can override @samp{libdir} and other standard
1463 variables as necessary.
1465 Note that @file{gmp.h} is a generated file, and will be architecture and ABI
1466 dependent. When attempting to install two ABIs simultaneously it will be
1467 important that an application compile gets the correct @file{gmp.h} for its
1468 desired ABI@. If compiler include paths don't vary with ABI options then it
1469 might be necessary to create a @file{/usr/include/gmp.h} which tests
1470 preprocessor symbols and chooses the correct actual @file{gmp.h}.
1474 @node Notes for Particular Systems, Known Build Problems, Notes for Package Builds, Installing GMP
1475 @section Notes for Particular Systems
1476 @cindex Build notes for particular systems
1477 @cindex Particular systems
1481 @c This section is more or less meant for notes about performance or about
1482 @c build problems that have been worked around but might leave a user
1483 @c scratching their head. Fun with different ABIs on a system belongs in the
1488 On systems @samp{*-*-aix[34]*} shared libraries are disabled by default, since
1489 some versions of the native @command{ar} fail on the convenience libraries
1490 used. A shared build can be attempted with
1493 ./configure --enable-shared --disable-static
1496 Note that the @samp{--disable-static} is necessary because in a shared build
1497 libtool makes @file{libgmp.a} a symlink to @file{libgmp.so}, apparently for
1498 the benefit of old versions of @command{ld} which only recognise @file{.a},
1499 but unfortunately this is done even if a fully functional @command{ld} is
1504 On systems @samp{arm*-*-*}, versions of GCC up to and including 2.95.3 have a
1505 bug in unsigned division, giving wrong results for some operands. GMP
1506 @samp{./configure} will demand GCC 2.95.4 or later.
1510 Compaq C++ on OSF 5.1 has two flavours of @code{iostream}, a standard one and
1511 an old pre-standard one (see @samp{man iostream_intro}). GMP can only use the
1512 standard one, which unfortunately is not the default but must be selected by
1513 defining @code{__USE_STD_IOSTREAM}. Configure with for instance
1516 ./configure --enable-cxx CPPFLAGS=-D__USE_STD_IOSTREAM
1519 @item Floating Point Mode
1520 @cindex Floating point mode
1521 @cindex Hardware floating point mode
1522 @cindex Precision of hardware floating point
1524 On some systems, the hardware floating point has a control mode which can set
1525 all operations to be done in a particular precision, for instance single,
1526 double or extended on x86 systems (x87 floating point). The GMP functions
1527 involving a @code{double} cannot be expected to operate to their full
1528 precision when the hardware is in single precision mode. Of course this
1529 affects all code, including application code, not just GMP.
1531 @item MS-DOS and MS Windows
1538 On an MS-DOS system DJGPP can be used to build GMP, and on an MS Windows
1539 system Cygwin, DJGPP and MINGW can be used. All three are excellent ports of
1540 GCC and the various GNU tools.
1543 @uref{http://www.cygwin.com/}
1544 @uref{http://www.delorie.com/djgpp/}
1545 @uref{http://www.mingw.org/}
1549 @cindex Services for Unix
1550 Microsoft also publishes an Interix ``Services for Unix'' which can be used to
1551 build GMP on Windows (with a normal @samp{./configure}), but it's not free
1554 @item MS Windows DLLs
1558 On systems @samp{*-*-cygwin*}, @samp{*-*-mingw*} and @samp{*-*-pw32*} by
1559 default GMP builds only a static library, but a DLL can be built instead using
1562 ./configure --disable-static --enable-shared
1565 Static and DLL libraries can't both be built, since certain export directives
1566 in @file{gmp.h} must be different.
1568 A MINGW DLL build of GMP can be used with Microsoft C@. Libtool doesn't
1569 install a @file{.lib} format import library, but it can be created with MS
1570 @command{lib} as follows, and copied to the install directory. Similarly for
1571 @file{libmp} and @file{libgmpxx}.
1575 lib /def:libgmp-3.dll.def /out:libgmp-3.lib
1578 MINGW uses the C runtime library @samp{msvcrt.dll} for I/O, so applications
1579 wanting to use the GMP I/O routines must be compiled with @samp{cl /MD} to do
1580 the same. If one of the other C runtime library choices provided by MS C is
1581 desired then the suggestion is to use the GMP string functions and confine I/O
1584 @item Motorola 68k CPU Types
1586 @samp{m68k} is taken to mean 68000. @samp{m68020} or higher will give a
1587 performance boost on applicable CPUs. @samp{m68360} can be used for CPU32
1588 series chips. @samp{m68302} can be used for ``Dragonball'' series chips,
1589 though this is merely a synonym for @samp{m68000}.
1593 @command{m4} in this release of OpenBSD has a bug in @code{eval} that makes it
1594 unsuitable for @file{.asm} file processing. @samp{./configure} will detect
1595 the problem and either abort or choose another m4 in the @env{PATH}. The bug
1596 is fixed in OpenBSD 2.7, so either upgrade or use GNU m4.
1598 @item Power CPU Types
1599 @cindex Power/PowerPC
1600 In GMP, CPU types @samp{power*} and @samp{powerpc*} will each use instructions
1601 not available on the other, so it's important to choose the right one for the
1602 CPU that will be used. Currently GMP has no assembly code support for using
1603 just the common instruction subset. To get executables that run on both, the
1604 current suggestion is to use the generic C code (CPU @samp{none}), possibly
1605 with appropriate compiler options (like @samp{-mcpu=common} for
1606 @command{gcc}). CPU @samp{rs6000} (which is not a CPU but a family of
1607 workstations) is accepted by @file{config.sub}, but is currently equivalent to
1610 @item Sparc CPU Types
1612 @samp{sparcv8} or @samp{supersparc} on relevant systems will give a
1613 significant performance increase over the V7 code selected by plain
1616 @item Sparc App Regs
1618 The GMP assembly code for both 32-bit and 64-bit Sparc clobbers the
1619 ``application registers'' @code{g2}, @code{g3} and @code{g4}, the same way
1620 that the GCC default @samp{-mapp-regs} does (@pxref{SPARC Options,, SPARC
1621 Options, gcc, Using the GNU Compiler Collection (GCC)}).
1623 This makes that code unsuitable for use with the special V9
1624 @samp{-mcmodel=embmedany} (which uses @code{g4} as a data segment pointer),
1625 and for applications wanting to use those registers for special purposes. In
1626 these cases the only suggestion currently is to build GMP with CPU @samp{none}
1627 to avoid the assembly code.
1631 @command{/usr/bin/m4} lacks various features needed to process @file{.asm}
1632 files, and instead @samp{./configure} will automatically use
1633 @command{/usr/5bin/m4}, which we believe is always available (if not then use
1640 @samp{i586}, @samp{pentium} or @samp{pentiummmx} code is good for its intended
1641 P5 Pentium chips, but quite slow when run on Intel P6 class chips (PPro, P-II,
1642 P-III)@. @samp{i386} is a better choice when making binaries that must run on
1645 @item x86 MMX and SSE2 Code
1648 If the CPU selected has MMX code but the assembler doesn't support it, a
1649 warning is given and non-MMX code is used instead. This will be an inferior
1650 build, since the MMX code that's present is there because it's faster than the
1651 corresponding plain integer code. The same applies to SSE2.
1653 Old versions of @samp{gas} don't support MMX instructions, in particular
1654 version 1.92.3 that comes with FreeBSD 2.2.8 or the more recent OpenBSD 3.1
1657 Solaris 2.6 and 2.7 @command{as} generate incorrect object code for register
1658 to register @code{movq} instructions, and so can't be used for MMX code.
1659 Install a recent @command{gas} if MMX code is wanted on these systems.
1664 @node Known Build Problems, Performance optimization, Notes for Particular Systems, Installing GMP
1665 @section Known Build Problems
1666 @cindex Build problems known
1668 @c This section is more or less meant for known build problems that are not
1669 @c otherwise worked around and require some sort of manual intervention.
1671 You might find more up-to-date information at @uref{http://gmplib.org/}.
1674 @item Compiler link options
1675 The version of libtool currently in use rather aggressively strips compiler
1676 options when linking a shared library. This will hopefully be relaxed in the
1677 future, but for now if this is a problem the suggestion is to create a little
1678 script to hide them, and for instance configure with
1681 ./configure CC=gcc-with-my-options
1684 @item DJGPP (@samp{*-*-msdosdjgpp*})
1686 The DJGPP port of @command{bash} 2.03 is unable to run the @samp{configure}
1687 script, it exits silently, having died writing a preamble to
1688 @file{config.log}. Use @command{bash} 2.04 or higher.
1690 @samp{make all} was found to run out of memory during the final
1691 @file{libgmp.la} link on one system tested, despite having 64Mb available.
1692 Running @samp{make libgmp.la} directly helped, perhaps recursing into the
1693 various subdirectories uses up memory.
1695 @item GNU binutils @command{strip} prior to 2.12
1696 @cindex Stripped libraries
1697 @cindex Binutils @command{strip}
1698 @cindex GNU @command{strip}
1699 @command{strip} from GNU binutils 2.11 and earlier should not be used on the
1700 static libraries @file{libgmp.a} and @file{libmp.a} since it will discard all
1701 but the last of multiple archive members with the same name, like the three
1702 versions of @file{init.o} in @file{libgmp.a}. Binutils 2.12 or higher can be
1705 The shared libraries @file{libgmp.so} and @file{libmp.so} are not affected by
1706 this and any version of @command{strip} can be used on them.
1708 @item @command{make} syntax error
1711 On certain versions of SCO OpenServer 5 and IRIX 6.5 the native @command{make}
1712 is unable to handle the long dependencies list for @file{libgmp.la}. The
1713 symptom is a ``syntax error'' on the following line of the top-level
1717 libgmp.la: $(libgmp_la_OBJECTS) $(libgmp_la_DEPENDENCIES)
1720 Either use GNU Make, or as a workaround remove
1721 @code{$(libgmp_la_DEPENDENCIES)} from that line (which will make the initial
1722 build work, but if any recompiling is done @file{libgmp.la} might not be
1725 @item MacOS X (@samp{*-*-darwin*})
1728 Libtool currently only knows how to create shared libraries on MacOS X using
1729 the native @command{cc} (which is a modified GCC), not a plain GCC@. A
1730 static-only build should work though (@samp{--disable-shared}).
1732 @item NeXT prior to 3.3
1734 The system compiler on old versions of NeXT was a massacred and old GCC, even
1735 if it called itself @file{cc}. This compiler cannot be used to build GMP, you
1736 need to get a real GCC, and install that. (NeXT may have fixed this in
1737 release 3.3 of their system.)
1739 @item POWER and PowerPC
1740 @cindex Power/PowerPC
1741 Bugs in GCC 2.7.2 (and 2.6.3) mean it can't be used to compile GMP on POWER or
1742 PowerPC@. If you want to use GCC for these machines, get GCC 2.7.2.1 (or
1745 @item Sequent Symmetry
1746 @cindex Sequent Symmetry
1747 Use the GNU assembler instead of the system assembler, since the latter has
1752 The system @command{sed} prints an error ``Output line too long'' when libtool
1753 builds @file{libgmp.la}. This doesn't seem to cause any obvious ill effects,
1754 but GNU @command{sed} is recommended, to avoid any doubt.
1756 @item Sparc Solaris 2.7 with gcc 2.95.2 in @samp{ABI=32}
1758 A shared library build of GMP seems to fail in this combination, it builds but
1759 then fails the tests, apparently due to some incorrect data relocations within
1760 @code{gmp_randinit_lc_2exp_size}. The exact cause is unknown,
1761 @samp{--disable-shared} is recommended.
1766 @node Performance optimization, , Known Build Problems, Installing GMP
1767 @section Performance optimization
1768 @cindex Optimizing performance
1770 @c At some point, this should perhaps move to a separate chapter on optimizing
1773 For optimal performance, build GMP for the exact CPU type of the target
1774 computer, see @ref{Build Options}.
1776 Unlike what is the case for most other programs, the compiler typically
1777 doesn't matter much, since GMP uses assembly language for the most critical
1780 In particular for long-running GMP applications, and applications demanding
1781 extremely large numbers, building and running the @code{tuneup} program in the
1782 @file{tune} subdirectory, can be important. For example,
1790 will generate better contents for the @file{gmp-mparam.h} parameter file.
1792 To use the results, put the output in the file file indicated in the
1793 @samp{Parameters for ...} header. Then recompile from scratch.
1795 The @code{tuneup} program takes one useful parameter, @samp{-f NNN}, which
1796 instructs the program how long to check FFT multiply parameters. If you're
1797 going to use GMP for extremely large numbers, you may want to run @code{tuneup}
1798 with a large NNN value.
1801 @node GMP Basics, Reporting Bugs, Installing GMP, Top
1802 @comment node-name, next, previous, up
1806 @strong{Using functions, macros, data types, etc.@: not documented in this
1807 manual is strongly discouraged. If you do so your application is guaranteed
1808 to be incompatible with future versions of GMP.}
1811 * Headers and Libraries::
1812 * Nomenclature and Types::
1813 * Function Classes::
1814 * Variable Conventions::
1815 * Parameter Conventions::
1816 * Memory Management::
1818 * Useful Macros and Constants::
1819 * Compatibility with older versions::
1820 * Demonstration Programs::
1828 @node Headers and Libraries, Nomenclature and Types, GMP Basics, GMP Basics
1829 @section Headers and Libraries
1832 @cindex @file{gmp.h}
1833 @cindex Include files
1834 @cindex @code{#include}
1835 All declarations needed to use GMP are collected in the include file
1836 @file{gmp.h}. It is designed to work with both C and C++ compilers.
1842 @cindex @code{stdio.h}
1843 Note however that prototypes for GMP functions with @code{FILE *} parameters
1844 are only provided if @code{<stdio.h>} is included too.
1851 @cindex @code{stdarg.h}
1852 Likewise @code{<stdarg.h>} (or @code{<varargs.h>}) is required for prototypes
1853 with @code{va_list} parameters, such as @code{gmp_vprintf}. And
1854 @code{<obstack.h>} for prototypes with @code{struct obstack} parameters, such
1855 as @code{gmp_obstack_printf}, when available.
1859 @cindex @code{libgmp}
1860 All programs using GMP must link against the @file{libgmp} library. On a
1861 typical Unix-like system this can be done with @samp{-lgmp}, for example
1864 gcc myprogram.c -lgmp
1867 @cindex @code{libgmpxx}
1868 GMP C++ functions are in a separate @file{libgmpxx} library. This is built
1869 and installed if C++ support has been enabled (@pxref{Build Options}). For
1873 g++ mycxxprog.cc -lgmpxx -lgmp
1877 GMP is built using Libtool and an application can use that to link if desired,
1878 @GMPpxreftop{libtool, GNU Libtool}.
1880 If GMP has been installed to a non-standard location then it may be necessary
1881 to use @samp{-I} and @samp{-L} compiler options to point to the right
1882 directories, and some sort of run-time path for a shared library.
1885 @node Nomenclature and Types, Function Classes, Headers and Libraries, GMP Basics
1886 @section Nomenclature and Types
1887 @cindex Nomenclature
1891 @tindex @code{mpz_t}
1892 In this manual, @dfn{integer} usually means a multiple precision integer, as
1893 defined by the GMP library. The C data type for such integers is @code{mpz_t}.
1894 Here are some examples of how to declare such integers:
1899 struct foo @{ mpz_t x, y; @};
1904 @cindex Rational number
1905 @tindex @code{mpq_t}
1906 @dfn{Rational number} means a multiple precision fraction. The C data type
1907 for these fractions is @code{mpq_t}. For example:
1913 @cindex Floating-point number
1914 @tindex @code{mpf_t}
1915 @dfn{Floating point number} or @dfn{Float} for short, is an arbitrary precision
1916 mantissa with a limited precision exponent. The C data type for such objects
1917 is @code{mpf_t}. For example:
1923 @tindex @code{mp_exp_t}
1924 The floating point functions accept and return exponents in the C type
1925 @code{mp_exp_t}. Currently this is usually a @code{long}, but on some systems
1926 it's an @code{int} for efficiency.
1929 @tindex @code{mp_limb_t}
1930 A @dfn{limb} means the part of a multi-precision number that fits in a single
1931 machine word. (We chose this word because a limb of the human body is
1932 analogous to a digit, only larger, and containing several digits.) Normally a
1933 limb is 32 or 64 bits. The C data type for a limb is @code{mp_limb_t}.
1935 @tindex @code{mp_size_t}
1936 Counts of limbs of a multi-precision number represented in the C type
1937 @code{mp_size_t}. Currently this is normally a @code{long}, but on some
1938 systems it's an @code{int} for efficiency, and on some systems it will be
1939 @code{long long} in the future.
1941 @tindex @code{mp_bitcnt_t}
1942 Counts of bits of a multi-precision number are represented in the C type
1943 @code{mp_bitcnt_t}. Currently this is always an @code{unsigned long}, but on
1944 some systems it will be an @code{unsigned long long} in the future .
1946 @cindex Random state
1947 @tindex @code{gmp_randstate_t}
1948 @dfn{Random state} means an algorithm selection and current state data. The C
1949 data type for such objects is @code{gmp_randstate_t}. For example:
1952 gmp_randstate_t rstate;
1955 Also, in general @code{mp_bitcnt_t} is used for bit counts and ranges, and
1956 @code{size_t} is used for byte or character counts.
1959 @node Function Classes, Variable Conventions, Nomenclature and Types, GMP Basics
1960 @section Function Classes
1961 @cindex Function classes
1963 There are six classes of functions in the GMP library:
1967 Functions for signed integer arithmetic, with names beginning with
1968 @code{mpz_}. The associated type is @code{mpz_t}. There are about 150
1969 functions in this class. (@pxref{Integer Functions})
1972 Functions for rational number arithmetic, with names beginning with
1973 @code{mpq_}. The associated type is @code{mpq_t}. There are about 40
1974 functions in this class, but the integer functions can be used for arithmetic
1975 on the numerator and denominator separately. (@pxref{Rational Number
1979 Functions for floating-point arithmetic, with names beginning with
1980 @code{mpf_}. The associated type is @code{mpf_t}. There are about 60
1981 functions is this class. (@pxref{Floating-point Functions})
1984 Functions compatible with Berkeley MP, such as @code{itom}, @code{madd}, and
1985 @code{mult}. The associated type is @code{MINT}. (@pxref{BSD Compatible
1989 Fast low-level functions that operate on natural numbers. These are used by
1990 the functions in the preceding groups, and you can also call them directly
1991 from very time-critical user programs. These functions' names begin with
1992 @code{mpn_}. The associated type is array of @code{mp_limb_t}. There are
1993 about 30 (hard-to-use) functions in this class. (@pxref{Low-level Functions})
1996 Miscellaneous functions. Functions for setting up custom allocation and
1997 functions for generating random numbers. (@pxref{Custom Allocation}, and
1998 @pxref{Random Number Functions})
2002 @node Variable Conventions, Parameter Conventions, Function Classes, GMP Basics
2003 @section Variable Conventions
2004 @cindex Variable conventions
2005 @cindex Conventions for variables
2007 GMP functions generally have output arguments before input arguments. This
2008 notation is by analogy with the assignment operator. The BSD MP compatibility
2009 functions are exceptions, having the output arguments last.
2011 GMP lets you use the same variable for both input and output in one call. For
2012 example, the main function for integer multiplication, @code{mpz_mul}, can be
2013 used to square @code{x} and put the result back in @code{x} with
2019 Before you can assign to a GMP variable, you need to initialize it by calling
2020 one of the special initialization functions. When you're done with a
2021 variable, you need to clear it out, using one of the functions for that
2022 purpose. Which function to use depends on the type of variable. See the
2023 chapters on integer functions, rational number functions, and floating-point
2024 functions for details.
2026 A variable should only be initialized once, or at least cleared between each
2027 initialization. After a variable has been initialized, it may be assigned to
2028 any number of times.
2030 For efficiency reasons, avoid excessive initializing and clearing. In
2031 general, initialize near the start of a function and clear near the end. For
2041 for (i = 1; i < 100; i++)
2043 mpz_mul (n, @dots{});
2044 mpz_fdiv_q (n, @dots{});
2052 @node Parameter Conventions, Memory Management, Variable Conventions, GMP Basics
2053 @section Parameter Conventions
2054 @cindex Parameter conventions
2055 @cindex Conventions for parameters
2057 When a GMP variable is used as a function parameter, it's effectively a
2058 call-by-reference, meaning if the function stores a value there it will change
2059 the original in the caller. Parameters which are input-only can be designated
2060 @code{const} to provoke a compiler error or warning on attempting to modify
2063 When a function is going to return a GMP result, it should designate a
2064 parameter that it sets, like the library functions do. More than one value
2065 can be returned by having more than one output parameter, again like the
2066 library functions. A @code{return} of an @code{mpz_t} etc doesn't return the
2067 object, only a pointer, and this is almost certainly not what's wanted.
2069 Here's an example accepting an @code{mpz_t} parameter, doing a calculation,
2070 and storing the result to the indicated parameter.
2074 foo (mpz_t result, const mpz_t param, unsigned long n)
2077 mpz_mul_ui (result, param, n);
2078 for (i = 1; i < n; i++)
2079 mpz_add_ui (result, result, i*7);
2087 mpz_init_set_str (n, "123456", 0);
2089 gmp_printf ("%Zd\n", r);
2094 @code{foo} works even if the mainline passes the same variable for
2095 @code{param} and @code{result}, just like the library functions. But
2096 sometimes it's tricky to make that work, and an application might not want to
2097 bother supporting that sort of thing.
2099 For interest, the GMP types @code{mpz_t} etc are implemented as one-element
2100 arrays of certain structures. This is why declaring a variable creates an
2101 object with the fields GMP needs, but then using it as a parameter passes a
2102 pointer to the object. Note that the actual fields in each @code{mpz_t} etc
2103 are for internal use only and should not be accessed directly by code that
2104 expects to be compatible with future GMP releases.
2108 @node Memory Management, Reentrancy, Parameter Conventions, GMP Basics
2109 @section Memory Management
2110 @cindex Memory management
2112 The GMP types like @code{mpz_t} are small, containing only a couple of sizes,
2113 and pointers to allocated data. Once a variable is initialized, GMP takes
2114 care of all space allocation. Additional space is allocated whenever a
2115 variable doesn't have enough.
2117 @code{mpz_t} and @code{mpq_t} variables never reduce their allocated space.
2118 Normally this is the best policy, since it avoids frequent reallocation.
2119 Applications that need to return memory to the heap at some particular point
2120 can use @code{mpz_realloc2}, or clear variables no longer needed.
2122 @code{mpf_t} variables, in the current implementation, use a fixed amount of
2123 space, determined by the chosen precision and allocated at initialization, so
2124 their size doesn't change.
2126 All memory is allocated using @code{malloc} and friends by default, but this
2127 can be changed, see @ref{Custom Allocation}. Temporary memory on the stack is
2128 also used (via @code{alloca}), but this can be changed at build-time if
2129 desired, see @ref{Build Options}.
2132 @node Reentrancy, Useful Macros and Constants, Memory Management, GMP Basics
2135 @cindex Thread safety
2136 @cindex Multi-threading
2139 GMP is reentrant and thread-safe, with some exceptions:
2143 If configured with @option{--enable-alloca=malloc-notreentrant} (or with
2144 @option{--enable-alloca=notreentrant} when @code{alloca} is not available),
2145 then naturally GMP is not reentrant.
2148 @code{mpf_set_default_prec} and @code{mpf_init} use a global variable for the
2149 selected precision. @code{mpf_init2} can be used instead, and in the C++
2150 interface an explicit precision to the @code{mpf_class} constructor.
2153 @code{mpz_random} and the other old random number functions use a global
2154 random state and are hence not reentrant. The newer random number functions
2155 that accept a @code{gmp_randstate_t} parameter can be used instead.
2158 @code{gmp_randinit} (obsolete) returns an error indication through a global
2159 variable, which is not thread safe. Applications are advised to use
2160 @code{gmp_randinit_default} or @code{gmp_randinit_lc_2exp} instead.
2163 @code{mp_set_memory_functions} uses global variables to store the selected
2164 memory allocation functions.
2167 If the memory allocation functions set by a call to
2168 @code{mp_set_memory_functions} (or @code{malloc} and friends by default) are
2169 not reentrant, then GMP will not be reentrant either.
2172 If the standard I/O functions such as @code{fwrite} are not reentrant then the
2173 GMP I/O functions using them will not be reentrant either.
2176 It's safe for two threads to read from the same GMP variable simultaneously,
2177 but it's not safe for one to read while the another might be writing, nor for
2178 two threads to write simultaneously. It's not safe for two threads to
2179 generate a random number from the same @code{gmp_randstate_t} simultaneously,
2180 since this involves an update of that variable.
2185 @node Useful Macros and Constants, Compatibility with older versions, Reentrancy, GMP Basics
2186 @section Useful Macros and Constants
2187 @cindex Useful macros and constants
2190 @deftypevr {Global Constant} {const int} mp_bits_per_limb
2191 @findex mp_bits_per_limb
2192 @cindex Bits per limb
2194 The number of bits per limb.
2197 @defmac __GNU_MP_VERSION
2198 @defmacx __GNU_MP_VERSION_MINOR
2199 @defmacx __GNU_MP_VERSION_PATCHLEVEL
2200 @cindex Version number
2201 @cindex GMP version number
2202 The major and minor GMP version, and patch level, respectively, as integers.
2203 For GMP i.j, these numbers will be i, j, and 0, respectively.
2204 For GMP i.j.k, these numbers will be i, j, and k, respectively.
2207 @deftypevr {Global Constant} {const char * const} gmp_version
2209 The GMP version number, as a null-terminated string, in the form ``i.j.k''.
2210 This release is @nicode{"@value{VERSION}"}. Note that the format ``i.j'' was
2211 used when k was zero was used before version 4.3.0.
2215 @defmacx __GMP_CFLAGS
2216 The compiler and compiler flags, respectively, used when compiling GMP, as
2221 @node Compatibility with older versions, Demonstration Programs, Useful Macros and Constants, GMP Basics
2222 @section Compatibility with older versions
2223 @cindex Compatibility with older versions
2224 @cindex Past GMP versions
2225 @cindex Upward compatibility
2227 This version of GMP is upwardly binary compatible with all 4.x and 3.x
2228 versions, and upwardly compatible at the source level with all 2.x versions,
2229 with the following exceptions.
2233 @code{mpn_gcd} had its source arguments swapped as of GMP 3.0, for consistency
2234 with other @code{mpn} functions.
2237 @code{mpf_get_prec} counted precision slightly differently in GMP 3.0 and
2238 3.0.1, but in 3.1 reverted to the 2.x style.
2241 There are a number of compatibility issues between GMP 1 and GMP 2 that of
2242 course also apply when porting applications from GMP 1 to GMP 4. Please
2243 see the GMP 2 manual for details.
2245 The Berkeley MP compatibility library (@pxref{BSD Compatible Functions}) is
2246 source and binary compatible with the standard @file{libmp}.
2249 @c @item Integer division functions round the result differently. The obsolete
2250 @c functions (@code{mpz_div}, @code{mpz_divmod}, @code{mpz_mdiv},
2251 @c @code{mpz_mdivmod}, etc) now all use floor rounding (i.e., they round the
2254 @c @minus{}infinity).
2261 @c There are a lot of functions for integer division, giving the user better
2262 @c control over the rounding.
2264 @c @item The function @code{mpz_mod} now compute the true @strong{mod} function.
2266 @c @item The functions @code{mpz_powm} and @code{mpz_powm_ui} now use
2267 @c @strong{mod} for reduction.
2269 @c @item The assignment functions for rational numbers do no longer canonicalize
2270 @c their results. In the case a non-canonical result could arise from an
2271 @c assignment, the user need to insert an explicit call to
2272 @c @code{mpq_canonicalize}. This change was made for efficiency.
2274 @c @item Output generated by @code{mpz_out_raw} in this release cannot be read
2275 @c by @code{mpz_inp_raw} in previous releases. This change was made for making
2276 @c the file format truly portable between machines with different word sizes.
2278 @c @item Several @code{mpn} functions have changed. But they were intentionally
2279 @c undocumented in previous releases.
2281 @c @item The functions @code{mpz_cmp_ui}, @code{mpz_cmp_si}, and @code{mpq_cmp_ui}
2282 @c are now implemented as macros, and thereby sometimes evaluate their
2283 @c arguments multiple times.
2285 @c @item The functions @code{mpz_pow_ui} and @code{mpz_ui_pow_ui} now yield 1
2286 @c for 0^0. (In version 1, they yielded 0.)
2288 @c In version 1 of the library, @code{mpq_set_den} handled negative
2289 @c denominators by copying the sign to the numerator. That is no longer done.
2291 @c Pure assignment functions do not canonicalize the assigned variable. It is
2292 @c the responsibility of the user to canonicalize the assigned variable before
2293 @c any arithmetic operations are performed on that variable.
2294 @c Note that this is an incompatible change from version 1 of the library.
2300 @node Demonstration Programs, Efficiency, Compatibility with older versions, GMP Basics
2301 @section Demonstration programs
2302 @cindex Demonstration programs
2303 @cindex Example programs
2304 @cindex Sample programs
2305 The @file{demos} subdirectory has some sample programs using GMP@. These
2306 aren't built or installed, but there's a @file{Makefile} with rules for them.
2315 The following programs are provided
2319 @cindex Expression parsing demo
2320 @cindex Parsing expressions demo
2321 @samp{pexpr} is an expression evaluator, the program used on the GMP web page.
2323 @cindex Expression parsing demo
2324 @cindex Parsing expressions demo
2325 The @samp{calc} subdirectory has a similar but simpler evaluator using
2326 @command{lex} and @command{yacc}.
2328 @cindex Expression parsing demo
2329 @cindex Parsing expressions demo
2330 The @samp{expr} subdirectory is yet another expression evaluator, a library
2331 designed for ease of use within a C program. See @file{demos/expr/README} for
2334 @cindex Factorization demo
2335 @samp{factorize} is a Pollard-Rho factorization program.
2337 @samp{isprime} is a command-line interface to the @code{mpz_probab_prime_p}
2340 @samp{primes} counts or lists primes in an interval, using a sieve.
2342 @samp{qcn} is an example use of @code{mpz_kronecker_ui} to estimate quadratic
2346 @cindex GMP Perl module
2348 The @samp{perl} subdirectory is a comprehensive perl interface to GMP@. See
2349 @file{demos/perl/INSTALL} for more information. Documentation is in POD
2350 format in @file{demos/perl/GMP.pm}.
2353 As an aside, consideration has been given at various times to some sort of
2354 expression evaluation within the main GMP library. Going beyond something
2355 minimal quickly leads to matters like user-defined functions, looping, fixnums
2356 for control variables, etc, which are considered outside the scope of GMP
2357 (much closer to language interpreters or compilers, @xref{Language Bindings}.)
2358 Something simple for program input convenience may yet be a possibility, a
2359 combination of the @file{expr} demo and the @file{pexpr} tree back-end
2360 perhaps. But for now the above evaluators are offered as illustrations.
2364 @node Efficiency, Debugging, Demonstration Programs, GMP Basics
2369 @item Small Operands
2370 @cindex Small operands
2371 On small operands, the time for function call overheads and memory allocation
2372 can be significant in comparison to actual calculation. This is unavoidable
2373 in a general purpose variable precision library, although GMP attempts to be
2374 as efficient as it can on both large and small operands.
2376 @item Static Linking
2377 @cindex Static linking
2378 On some CPUs, in particular the x86s, the static @file{libgmp.a} should be
2379 used for maximum speed, since the PIC code in the shared @file{libgmp.so} will
2380 have a small overhead on each function call and global data address. For many
2381 programs this will be insignificant, but for long calculations there's a gain
2384 @item Initializing and Clearing
2385 @cindex Initializing and clearing
2386 Avoid excessive initializing and clearing of variables, since this can be
2387 quite time consuming, especially in comparison to otherwise fast operations
2390 A language interpreter might want to keep a free list or stack of
2391 initialized variables ready for use. It should be possible to integrate
2392 something like that with a garbage collector too.
2395 @cindex Reallocations
2396 An @code{mpz_t} or @code{mpq_t} variable used to hold successively increasing
2397 values will have its memory repeatedly @code{realloc}ed, which could be quite
2398 slow or could fragment memory, depending on the C library. If an application
2399 can estimate the final size then @code{mpz_init2} or @code{mpz_realloc2} can
2400 be called to allocate the necessary space from the beginning
2401 (@pxref{Initializing Integers}).
2403 It doesn't matter if a size set with @code{mpz_init2} or @code{mpz_realloc2}
2404 is too small, since all functions will do a further reallocation if necessary.
2405 Badly overestimating memory required will waste space though.
2407 @item @code{2exp} Functions
2408 @cindex @code{2exp} functions
2409 It's up to an application to call functions like @code{mpz_mul_2exp} when
2410 appropriate. General purpose functions like @code{mpz_mul} make no attempt to
2411 identify powers of two or other special forms, because such inputs will
2412 usually be very rare and testing every time would be wasteful.
2414 @item @code{ui} and @code{si} Functions
2415 @cindex @code{ui} and @code{si} functions
2416 The @code{ui} functions and the small number of @code{si} functions exist for
2417 convenience and should be used where applicable. But if for example an
2418 @code{mpz_t} contains a value that fits in an @code{unsigned long} there's no
2419 need extract it and call a @code{ui} function, just use the regular @code{mpz}
2422 @item In-Place Operations
2423 @cindex In-place operations
2424 @code{mpz_abs}, @code{mpq_abs}, @code{mpf_abs}, @code{mpz_neg}, @code{mpq_neg}
2425 and @code{mpf_neg} are fast when used for in-place operations like
2426 @code{mpz_abs(x,x)}, since in the current implementation only a single field
2427 of @code{x} needs changing. On suitable compilers (GCC for instance) this is
2430 @code{mpz_add_ui}, @code{mpz_sub_ui}, @code{mpf_add_ui} and @code{mpf_sub_ui}
2431 benefit from an in-place operation like @code{mpz_add_ui(x,x,y)}, since
2432 usually only one or two limbs of @code{x} will need to be changed. The same
2433 applies to the full precision @code{mpz_add} etc if @code{y} is small. If
2434 @code{y} is big then cache locality may be helped, but that's all.
2436 @code{mpz_mul} is currently the opposite, a separate destination is slightly
2437 better. A call like @code{mpz_mul(x,x,y)} will, unless @code{y} is only one
2438 limb, make a temporary copy of @code{x} before forming the result. Normally
2439 that copying will only be a tiny fraction of the time for the multiply, so
2440 this is not a particularly important consideration.
2442 @code{mpz_set}, @code{mpq_set}, @code{mpq_set_num}, @code{mpf_set}, etc, make
2443 no attempt to recognise a copy of something to itself, so a call like
2444 @code{mpz_set(x,x)} will be wasteful. Naturally that would never be written
2445 deliberately, but if it might arise from two pointers to the same object then
2446 a test to avoid it might be desirable.
2453 Note that it's never worth introducing extra @code{mpz_set} calls just to get
2454 in-place operations. If a result should go to a particular variable then just
2455 direct it there and let GMP take care of data movement.
2457 @item Divisibility Testing (Small Integers)
2458 @cindex Divisibility testing
2459 @code{mpz_divisible_ui_p} and @code{mpz_congruent_ui_p} are the best functions
2460 for testing whether an @code{mpz_t} is divisible by an individual small
2461 integer. They use an algorithm which is faster than @code{mpz_tdiv_ui}, but
2462 which gives no useful information about the actual remainder, only whether
2463 it's zero (or a particular value).
2465 However when testing divisibility by several small integers, it's best to take
2466 a remainder modulo their product, to save multi-precision operations. For
2467 instance to test whether a number is divisible by any of 23, 29 or 31 take a
2468 remainder modulo @math{23@times{}29@times{}31 = 20677} and then test that.
2470 The division functions like @code{mpz_tdiv_q_ui} which give a quotient as well
2471 as a remainder are generally a little slower than the remainder-only functions
2472 like @code{mpz_tdiv_ui}. If the quotient is only rarely wanted then it's
2473 probably best to just take a remainder and then go back and calculate the
2474 quotient if and when it's wanted (@code{mpz_divexact_ui} can be used if the
2477 @item Rational Arithmetic
2478 @cindex Rational arithmetic
2479 The @code{mpq} functions operate on @code{mpq_t} values with no common factors
2480 in the numerator and denominator. Common factors are checked-for and cast out
2481 as necessary. In general, cancelling factors every time is the best approach
2482 since it minimizes the sizes for subsequent operations.
2484 However, applications that know something about the factorization of the
2485 values they're working with might be able to avoid some of the GCDs used for
2486 canonicalization, or swap them for divisions. For example when multiplying by
2487 a prime it's enough to check for factors of it in the denominator instead of
2488 doing a full GCD@. Or when forming a big product it might be known that very
2489 little cancellation will be possible, and so canonicalization can be left to
2492 The @code{mpq_numref} and @code{mpq_denref} macros give access to the
2493 numerator and denominator to do things outside the scope of the supplied
2494 @code{mpq} functions. @xref{Applying Integer Functions}.
2496 The canonical form for rationals allows mixed-type @code{mpq_t} and integer
2497 additions or subtractions to be done directly with multiples of the
2498 denominator. This will be somewhat faster than @code{mpq_add}. For example,
2502 mpz_add (mpq_numref(q), mpq_numref(q), mpq_denref(q));
2504 /* mpq += unsigned long */
2505 mpz_addmul_ui (mpq_numref(q), mpq_denref(q), 123UL);
2508 mpz_submul (mpq_numref(q), mpq_denref(q), z);
2511 @item Number Sequences
2512 @cindex Number sequences
2513 Functions like @code{mpz_fac_ui}, @code{mpz_fib_ui} and @code{mpz_bin_uiui}
2514 are designed for calculating isolated values. If a range of values is wanted
2515 it's probably best to call to get a starting point and iterate from there.
2517 @item Text Input/Output
2518 @cindex Text input/output
2519 Hexadecimal or octal are suggested for input or output in text form.
2520 Power-of-2 bases like these can be converted much more efficiently than other
2521 bases, like decimal. For big numbers there's usually nothing of particular
2522 interest to be seen in the digits, so the base doesn't matter much.
2524 Maybe we can hope octal will one day become the normal base for everyday use,
2525 as proposed by King Charles XII of Sweden and later reformers.
2526 @c Reference: Knuth volume 2 section 4.1, page 184 of second edition. :-)
2530 @node Debugging, Profiling, Efficiency, GMP Basics
2535 @item Stack Overflow
2536 @cindex Stack overflow
2537 @cindex Segmentation violation
2539 Depending on the system, a segmentation violation or bus error might be the
2540 only indication of stack overflow. See @samp{--enable-alloca} choices in
2541 @ref{Build Options}, for how to address this.
2543 In new enough versions of GCC, @samp{-fstack-check} may be able to ensure an
2544 overflow is recognised by the system before too much damage is done, or
2545 @samp{-fstack-limit-symbol} or @samp{-fstack-limit-register} may be able to
2546 add checking if the system itself doesn't do any (@pxref{Code Gen Options,,
2547 Options for Code Generation, gcc, Using the GNU Compiler Collection (GCC)}).
2548 These options must be added to the @samp{CFLAGS} used in the GMP build
2549 (@pxref{Build Options}), adding them just to an application will have no
2550 effect. Note also they're a slowdown, adding overhead to each function call
2551 and each stack allocation.
2554 @cindex Heap problems
2555 @cindex Malloc problems
2556 The most likely cause of application problems with GMP is heap corruption.
2557 Failing to @code{init} GMP variables will have unpredictable effects, and
2558 corruption arising elsewhere in a program may well affect GMP@. Initializing
2559 GMP variables more than once or failing to clear them will cause memory leaks.
2561 @cindex Malloc debugger
2562 In all such cases a @code{malloc} debugger is recommended. On a GNU or BSD
2563 system the standard C library @code{malloc} has some diagnostic facilities,
2564 see @ref{Allocation Debugging,, Allocation Debugging, libc, The GNU C Library
2565 Reference Manual}, or @samp{man 3 malloc}. Other possibilities, in no
2566 particular order, include
2569 @uref{http://www.inf.ethz.ch/personal/biere/projects/ccmalloc/}
2570 @uref{http://dmalloc.com/}
2571 @uref{http://www.perens.com/FreeSoftware/} @ (electric fence)
2572 @uref{http://packages.debian.org/stable/devel/fda}
2573 @uref{http://www.gnupdate.org/components/leakbug/}
2574 @uref{http://people.redhat.com/~otaylor/memprof/}
2575 @uref{http://www.cbmamiga.demon.co.uk/mpatrol/}
2578 The GMP default allocation routines in @file{memory.c} also have a simple
2579 sentinel scheme which can be enabled with @code{#define DEBUG} in that file.
2580 This is mainly designed for detecting buffer overruns during GMP development,
2581 but might find other uses.
2583 @item Stack Backtraces
2584 @cindex Stack backtrace
2585 On some systems the compiler options GMP uses by default can interfere with
2586 debugging. In particular on x86 and 68k systems @samp{-fomit-frame-pointer}
2587 is used and this generally inhibits stack backtracing. Recompiling without
2588 such options may help while debugging, though the usual caveats about it
2589 potentially moving a memory problem or hiding a compiler bug will apply.
2591 @item GDB, the GNU Debugger
2593 @cindex GNU Debugger
2594 A sample @file{.gdbinit} is included in the distribution, showing how to call
2595 some undocumented dump functions to print GMP variables from within GDB@. Note
2596 that these functions shouldn't be used in final application code since they're
2597 undocumented and may be subject to incompatible changes in future versions of
2600 @item Source File Paths
2601 GMP has multiple source files with the same name, in different directories.
2602 For example @file{mpz}, @file{mpq} and @file{mpf} each have an
2603 @file{init.c}. If the debugger can't already determine the right one it may
2604 help to build with absolute paths on each C file. One way to do that is to
2605 use a separate object directory with an absolute path to the source directory.
2609 /my/source/dir/gmp-@value{VERSION}/configure
2612 This works via @code{VPATH}, and might require GNU @command{make}.
2613 Alternately it might be possible to change the @code{.c.lo} rules
2616 @item Assertion Checking
2617 @cindex Assertion checking
2618 The build option @option{--enable-assert} is available to add some consistency
2619 checks to the library (see @ref{Build Options}). These are likely to be of
2620 limited value to most applications. Assertion failures are just as likely to
2621 indicate memory corruption as a library or compiler bug.
2623 Applications using the low-level @code{mpn} functions, however, will benefit
2624 from @option{--enable-assert} since it adds checks on the parameters of most
2625 such functions, many of which have subtle restrictions on their usage. Note
2626 however that only the generic C code has checks, not the assembly code, so
2627 CPU @samp{none} should be used for maximum checking.
2629 @item Temporary Memory Checking
2630 The build option @option{--enable-alloca=debug} arranges that each block of
2631 temporary memory in GMP is allocated with a separate call to @code{malloc} (or
2632 the allocation function set with @code{mp_set_memory_functions}).
2634 This can help a malloc debugger detect accesses outside the intended bounds,
2635 or detect memory not released. In a normal build, on the other hand,
2636 temporary memory is allocated in blocks which GMP divides up for its own use,
2637 or may be allocated with a compiler builtin @code{alloca} which will go
2638 nowhere near any malloc debugger hooks.
2640 @item Maximum Debuggability
2641 To summarize the above, a GMP build for maximum debuggability would be
2644 ./configure --disable-shared --enable-assert \
2645 --enable-alloca=debug --host=none CFLAGS=-g
2648 For C++, add @samp{--enable-cxx CXXFLAGS=-g}.
2653 The GCC checker (@uref{http://savannah.nongnu.org/projects/checker/}) can be
2654 used with GMP@. It contains a stub library which means GMP applications
2655 compiled with checker can use a normal GMP build.
2657 A build of GMP with checking within GMP itself can be made. This will run
2658 very very slowly. On GNU/Linux for example,
2660 @cindex @command{checkergcc}
2662 ./configure --host=none-pc-linux-gnu CC=checkergcc
2665 @samp{--host=none} must be used, since the GMP assembly code doesn't support
2666 the checking scheme. The GMP C++ features cannot be used, since current
2667 versions of checker (0.9.9.1) don't yet support the standard C++ library.
2671 The valgrind program (@uref{http://valgrind.org/}) is a memory
2672 checker for x86s. It translates and emulates machine instructions to do
2673 strong checks for uninitialized data (at the level of individual bits), memory
2674 accesses through bad pointers, and memory leaks.
2676 Recent versions of Valgrind are getting support for MMX and SSE/SSE2
2677 instructions, for past versions GMP will need to be configured not to use
2678 those, ie.@: for an x86 without them (for instance plain @samp{i486}).
2680 @item Other Problems
2681 Any suspected bug in GMP itself should be isolated to make sure it's not an
2682 application problem, see @ref{Reporting Bugs}.
2686 @node Profiling, Autoconf, Debugging, GMP Basics
2689 @cindex Execution profiling
2690 @cindex @code{--enable-profiling}
2692 Running a program under a profiler is a good way to find where it's spending
2693 most time and where improvements can be best sought. The profiling choices
2694 for a GMP build are as follows.
2697 @item @samp{--disable-profiling}
2698 The default is to add nothing special for profiling.
2700 It should be possible to just compile the mainline of a program with @code{-p}
2701 and use @command{prof} to get a profile consisting of timer-based sampling of
2702 the program counter. Most of the GMP assembly code has the necessary symbol
2705 This approach has the advantage of minimizing interference with normal program
2706 operation, but on most systems the resolution of the sampling is quite low (10
2707 milliseconds for instance), requiring long runs to get accurate information.
2709 @item @samp{--enable-profiling=prof}
2711 Build with support for the system @command{prof}, which means @samp{-p} added
2712 to the @samp{CFLAGS}.
2714 This provides call counting in addition to program counter sampling, which
2715 allows the most frequently called routines to be identified, and an average
2716 time spent in each routine to be determined.
2718 The x86 assembly code has support for this option, but on other processors
2719 the assembly routines will be as if compiled without @samp{-p} and therefore
2720 won't appear in the call counts.
2722 On some systems, such as GNU/Linux, @samp{-p} in fact means @samp{-pg} and in
2723 this case @samp{--enable-profiling=gprof} described below should be used
2726 @item @samp{--enable-profiling=gprof}
2727 @cindex @code{gprof}
2728 Build with support for @command{gprof}, which means @samp{-pg} added to the
2731 This provides call graph construction in addition to call counting and program
2732 counter sampling, which makes it possible to count calls coming from different
2733 locations. For example the number of calls to @code{mpn_mul} from
2734 @code{mpz_mul} versus the number from @code{mpf_mul}. The program counter
2735 sampling is still flat though, so only a total time in @code{mpn_mul} would be
2736 accumulated, not a separate amount for each call site.
2738 The x86 assembly code has support for this option, but on other processors
2739 the assembly routines will be as if compiled without @samp{-pg} and therefore
2740 not be included in the call counts.
2742 On x86 and m68k systems @samp{-pg} and @samp{-fomit-frame-pointer} are
2743 incompatible, so the latter is omitted from the default flags in that case,
2744 which might result in poorer code generation.
2746 Incidentally, it should be possible to use the @command{gprof} program with a
2747 plain @samp{--enable-profiling=prof} build. But in that case only the
2748 @samp{gprof -p} flat profile and call counts can be expected to be valid, not
2749 the @samp{gprof -q} call graph.
2751 @item @samp{--enable-profiling=instrument}
2752 @cindex @code{-finstrument-functions}
2753 @cindex @code{instrument-functions}
2754 Build with the GCC option @samp{-finstrument-functions} added to the
2755 @samp{CFLAGS} (@pxref{Code Gen Options,, Options for Code Generation, gcc,
2756 Using the GNU Compiler Collection (GCC)}).
2758 This inserts special instrumenting calls at the start and end of each
2759 function, allowing exact timing and full call graph construction.
2761 This instrumenting is not normally a standard system feature and will require
2762 support from an external library, such as
2764 @cindex FunctionCheck
2767 @uref{http://sourceforge.net/projects/fnccheck/}
2770 This should be included in @samp{LIBS} during the GMP configure so that test
2771 programs will link. For example,
2774 ./configure --enable-profiling=instrument LIBS=-lfc
2777 On a GNU system the C library provides dummy instrumenting functions, so
2778 programs compiled with this option will link. In this case it's only
2779 necessary to ensure the correct library is added when linking an application.
2781 The x86 assembly code supports this option, but on other processors the
2782 assembly routines will be as if compiled without
2783 @samp{-finstrument-functions} meaning time spent in them will effectively be
2784 attributed to their caller.
2788 @node Autoconf, Emacs, Profiling, GMP Basics
2792 Autoconf based applications can easily check whether GMP is installed. The
2793 only thing to be noted is that GMP library symbols from version 3 onwards have
2794 prefixes like @code{__gmpz}. The following therefore would be a simple test,
2796 @cindex @code{AC_CHECK_LIB}
2798 AC_CHECK_LIB(gmp, __gmpz_init)
2801 This just uses the default @code{AC_CHECK_LIB} actions for found or not found,
2802 but an application that must have GMP would want to generate an error if not
2806 AC_CHECK_LIB(gmp, __gmpz_init, ,
2807 [AC_MSG_ERROR([GNU MP not found, see http://gmplib.org/])])
2810 If functions added in some particular version of GMP are required, then one of
2811 those can be used when checking. For example @code{mpz_mul_si} was added in
2815 AC_CHECK_LIB(gmp, __gmpz_mul_si, ,
2817 [GNU MP not found, or not 3.1 or up, see http://gmplib.org/])])
2820 An alternative would be to test the version number in @file{gmp.h} using say
2821 @code{AC_EGREP_CPP}. That would make it possible to test the exact version,
2822 if some particular sub-minor release is known to be necessary.
2824 In general it's recommended that applications should simply demand a new
2825 enough GMP rather than trying to provide supplements for features not
2826 available in past versions.
2828 Occasionally an application will need or want to know the size of a type at
2829 configuration or preprocessing time, not just with @code{sizeof} in the code.
2830 This can be done in the normal way with @code{mp_limb_t} etc, but GMP 4.0 or
2831 up is best for this, since prior versions needed certain @samp{-D} defines on
2832 systems using a @code{long long} limb. The following would suit Autoconf 2.50
2836 AC_CHECK_SIZEOF(mp_limb_t, , [#include <gmp.h>])
2840 @node Emacs, , Autoconf, GMP Basics
2843 @cindex @code{info-lookup-symbol}
2845 @key{C-h C-i} (@code{info-lookup-symbol}) is a good way to find documentation
2846 on C functions while editing (@pxref{Info Lookup, , Info Documentation Lookup,
2847 emacs, The Emacs Editor}).
2849 The GMP manual can be included in such lookups by putting the following in
2852 @c This isn't pretty, but there doesn't seem to be a better way (in emacs
2853 @c 21.2 at least). info-lookup->mode-value could be used for the "assoc"s,
2854 @c but that function isn't documented, whereas info-lookup-alist is.
2857 (eval-after-load "info-look"
2858 '(let ((mode-value (assoc 'c-mode (assoc 'symbol info-lookup-alist))))
2859 (setcar (nthcdr 3 mode-value)
2860 (cons '("(gmp)Function Index" nil "^ -.* " "\\>")
2861 (nth 3 mode-value)))))
2865 @node Reporting Bugs, Integer Functions, GMP Basics, Top
2866 @comment node-name, next, previous, up
2867 @chapter Reporting Bugs
2868 @cindex Reporting bugs
2869 @cindex Bug reporting
2871 If you think you have found a bug in the GMP library, please investigate it
2872 and report it. We have made this library available to you, and it is not too
2873 much to ask you to report the bugs you find.
2875 Before you report a bug, check it's not already addressed in @ref{Known Build
2876 Problems}, or perhaps @ref{Notes for Particular Systems}. You may also want
2877 to check @uref{http://gmplib.org/} for patches for this release.
2879 Please include the following in any report,
2883 The GMP version number, and if pre-packaged or patched then say so.
2886 A test program that makes it possible for us to reproduce the bug. Include
2887 instructions on how to run the program.
2890 A description of what is wrong. If the results are incorrect, in what way.
2891 If you get a crash, say so.
2894 If you get a crash, include a stack backtrace from the debugger if it's
2895 informative (@samp{where} in @command{gdb}, or @samp{$C} in @command{adb}).
2898 Please do not send core dumps, executables or @command{strace}s.
2901 The configuration options you used when building GMP, if any.
2904 The name of the compiler and its version. For @command{gcc}, get the version
2905 with @samp{gcc -v}, otherwise perhaps @samp{what `which cc`}, or similar.
2908 The output from running @samp{uname -a}.
2911 The output from running @samp{./config.guess}, and from running
2912 @samp{./configfsf.guess} (might be the same).
2915 If the bug is related to @samp{configure}, then the compressed contents of
2919 If the bug is related to an @file{asm} file not assembling, then the contents
2920 of @file{config.m4} and the offending line or lines from the temporary
2921 @file{mpn/tmp-<file>.s}.
2924 Please make an effort to produce a self-contained report, with something
2925 definite that can be tested or debugged. Vague queries or piecemeal messages
2926 are difficult to act on and don't help the development effort.
2928 It is not uncommon that an observed problem is actually due to a bug in the
2929 compiler; the GMP code tends to explore interesting corners in compilers.
2931 If your bug report is good, we will do our best to help you get a corrected
2932 version of the library; if the bug report is poor, we won't do anything about
2933 it (except maybe ask you to send a better report).
2935 Send your report to: @email{gmp-bugs@@gmplib.org}.
2937 If you think something in this manual is unclear, or downright incorrect, or if
2938 the language needs to be improved, please send a note to the same address.
2941 @node Integer Functions, Rational Number Functions, Reporting Bugs, Top
2942 @comment node-name, next, previous, up
2943 @chapter Integer Functions
2944 @cindex Integer functions
2946 This chapter describes the GMP functions for performing integer arithmetic.
2947 These functions start with the prefix @code{mpz_}.
2949 GMP integers are stored in objects of type @code{mpz_t}.
2952 * Initializing Integers::
2953 * Assigning Integers::
2954 * Simultaneous Integer Init & Assign::
2955 * Converting Integers::
2956 * Integer Arithmetic::
2957 * Integer Division::
2958 * Integer Exponentiation::
2960 * Number Theoretic Functions::
2961 * Integer Comparisons::
2962 * Integer Logic and Bit Fiddling::
2964 * Integer Random Numbers::
2965 * Integer Import and Export::
2966 * Miscellaneous Integer Functions::
2967 * Integer Special Functions::
2970 @node Initializing Integers, Assigning Integers, Integer Functions, Integer Functions
2971 @comment node-name, next, previous, up
2972 @section Initialization Functions
2973 @cindex Integer initialization functions
2974 @cindex Initialization functions
2976 The functions for integer arithmetic assume that all integer objects are
2977 initialized. You do that by calling the function @code{mpz_init}. For
2985 mpz_add (integ, @dots{});
2987 mpz_sub (integ, @dots{});
2989 /* Unless the program is about to exit, do ... */
2994 As you can see, you can store new values any number of times, once an
2995 object is initialized.
2997 @deftypefun void mpz_init (mpz_t @var{x})
2998 Initialize @var{x}, and set its value to 0.
3001 @deftypefun void mpz_inits (mpz_t @var{x}, ...)
3002 Initialize a NULL-terminated list of @code{mpz_t} variables, and set their
3006 @deftypefun void mpz_init2 (mpz_t @var{x}, mp_bitcnt_t @var{n})
3007 Initialize @var{x}, with space for @var{n}-bit numbers, and set its value to 0.
3008 Calling this function instead of @code{mpz_init} or @code{mpz_inits} is never
3009 necessary; reallocation is handled automatically by GMP when needed.
3011 @var{n} is only the initial space, @var{x} will grow automatically in
3012 the normal way, if necessary, for subsequent values stored. @code{mpz_init2}
3013 makes it possible to avoid such reallocations if a maximum size is known in
3017 @deftypefun void mpz_clear (mpz_t @var{x})
3018 Free the space occupied by @var{x}. Call this function for all @code{mpz_t}
3019 variables when you are done with them.
3022 @deftypefun void mpz_clears (mpz_t @var{x}, ...)
3023 Free the space occupied by a NULL-terminated list of @code{mpz_t} variables.
3026 @deftypefun void mpz_realloc2 (mpz_t @var{x}, mp_bitcnt_t @var{n})
3027 Change the space allocated for @var{x} to @var{n} bits. The value in @var{x}
3028 is preserved if it fits, or is set to 0 if not.
3030 Calling this function is never necessary; reallocation is handled automatically
3031 by GMP when needed. But this function can be used to increase the space for a
3032 variable in order to avoid repeated automatic reallocations, or to decrease it
3033 to give memory back to the heap.
3037 @node Assigning Integers, Simultaneous Integer Init & Assign, Initializing Integers, Integer Functions
3038 @comment node-name, next, previous, up
3039 @section Assignment Functions
3040 @cindex Integer assignment functions
3041 @cindex Assignment functions
3043 These functions assign new values to already initialized integers
3044 (@pxref{Initializing Integers}).
3046 @deftypefun void mpz_set (mpz_t @var{rop}, mpz_t @var{op})
3047 @deftypefunx void mpz_set_ui (mpz_t @var{rop}, unsigned long int @var{op})
3048 @deftypefunx void mpz_set_si (mpz_t @var{rop}, signed long int @var{op})
3049 @deftypefunx void mpz_set_d (mpz_t @var{rop}, double @var{op})
3050 @deftypefunx void mpz_set_q (mpz_t @var{rop}, mpq_t @var{op})
3051 @deftypefunx void mpz_set_f (mpz_t @var{rop}, mpf_t @var{op})
3052 Set the value of @var{rop} from @var{op}.
3054 @code{mpz_set_d}, @code{mpz_set_q} and @code{mpz_set_f} truncate @var{op} to
3058 @deftypefun int mpz_set_str (mpz_t @var{rop}, char *@var{str}, int @var{base})
3059 Set the value of @var{rop} from @var{str}, a null-terminated C string in base
3060 @var{base}. White space is allowed in the string, and is simply ignored.
3062 The @var{base} may vary from 2 to 62, or if @var{base} is 0, then the leading
3063 characters are used: @code{0x} and @code{0X} for hexadecimal, @code{0b} and
3064 @code{0B} for binary, @code{0} for octal, or decimal otherwise.
3066 For bases up to 36, case is ignored; upper-case and lower-case letters have
3067 the same value. For bases 37 to 62, upper-case letter represent the usual
3068 10..35 while lower-case letter represent 36..61.
3070 This function returns 0 if the entire string is a valid number in base
3071 @var{base}. Otherwise it returns @minus{}1.
3073 @c It turns out that it is not entirely true that this function ignores
3074 @c white-space. It does ignore it between digits, but not after a minus sign
3075 @c or within or after ``0x''. Some thought was given to disallowing all
3076 @c whitespace, but that would be an incompatible change, whitespace has been
3077 @c documented as ignored ever since GMP 1.
3081 @deftypefun void mpz_swap (mpz_t @var{rop1}, mpz_t @var{rop2})
3082 Swap the values @var{rop1} and @var{rop2} efficiently.
3086 @node Simultaneous Integer Init & Assign, Converting Integers, Assigning Integers, Integer Functions
3087 @comment node-name, next, previous, up
3088 @section Combined Initialization and Assignment Functions
3089 @cindex Integer assignment functions
3090 @cindex Assignment functions
3091 @cindex Integer initialization functions
3092 @cindex Initialization functions
3094 For convenience, GMP provides a parallel series of initialize-and-set functions
3095 which initialize the output and then store the value there. These functions'
3096 names have the form @code{mpz_init_set@dots{}}
3098 Here is an example of using one:
3103 mpz_init_set_str (pie, "3141592653589793238462643383279502884", 10);
3105 mpz_sub (pie, @dots{});
3112 Once the integer has been initialized by any of the @code{mpz_init_set@dots{}}
3113 functions, it can be used as the source or destination operand for the ordinary
3114 integer functions. Don't use an initialize-and-set function on a variable
3115 already initialized!
3117 @deftypefun void mpz_init_set (mpz_t @var{rop}, mpz_t @var{op})
3118 @deftypefunx void mpz_init_set_ui (mpz_t @var{rop}, unsigned long int @var{op})
3119 @deftypefunx void mpz_init_set_si (mpz_t @var{rop}, signed long int @var{op})
3120 @deftypefunx void mpz_init_set_d (mpz_t @var{rop}, double @var{op})
3121 Initialize @var{rop} with limb space and set the initial numeric value from
3125 @deftypefun int mpz_init_set_str (mpz_t @var{rop}, char *@var{str}, int @var{base})
3126 Initialize @var{rop} and set its value like @code{mpz_set_str} (see its
3127 documentation above for details).
3129 If the string is a correct base @var{base} number, the function returns 0;
3130 if an error occurs it returns @minus{}1. @var{rop} is initialized even if
3131 an error occurs. (I.e., you have to call @code{mpz_clear} for it.)
3135 @node Converting Integers, Integer Arithmetic, Simultaneous Integer Init & Assign, Integer Functions
3136 @comment node-name, next, previous, up
3137 @section Conversion Functions
3138 @cindex Integer conversion functions
3139 @cindex Conversion functions
3141 This section describes functions for converting GMP integers to standard C
3142 types. Functions for converting @emph{to} GMP integers are described in
3143 @ref{Assigning Integers} and @ref{I/O of Integers}.
3145 @deftypefun {unsigned long int} mpz_get_ui (mpz_t @var{op})
3146 Return the value of @var{op} as an @code{unsigned long}.
3148 If @var{op} is too big to fit an @code{unsigned long} then just the least
3149 significant bits that do fit are returned. The sign of @var{op} is ignored,
3150 only the absolute value is used.
3153 @deftypefun {signed long int} mpz_get_si (mpz_t @var{op})
3154 If @var{op} fits into a @code{signed long int} return the value of @var{op}.
3155 Otherwise return the least significant part of @var{op}, with the same sign
3158 If @var{op} is too big to fit in a @code{signed long int}, the returned
3159 result is probably not very useful. To find out if the value will fit, use
3160 the function @code{mpz_fits_slong_p}.
3163 @deftypefun double mpz_get_d (mpz_t @var{op})
3164 Convert @var{op} to a @code{double}, truncating if necessary (ie.@: rounding
3167 If the exponent from the conversion is too big, the result is system
3168 dependent. An infinity is returned where available. A hardware overflow trap
3169 may or may not occur.
3172 @deftypefun double mpz_get_d_2exp (signed long int *@var{exp}, mpz_t @var{op})
3173 Convert @var{op} to a @code{double}, truncating if necessary (ie.@: rounding
3174 towards zero), and returning the exponent separately.
3176 The return value is in the range @math{0.5@le{}@GMPabs{@var{d}}<1} and the
3177 exponent is stored to @code{*@var{exp}}. @m{@var{d} * 2^{exp}, @var{d} *
3178 2^@var{exp}} is the (truncated) @var{op} value. If @var{op} is zero, the
3179 return is @math{0.0} and 0 is stored to @code{*@var{exp}}.
3181 @cindex @code{frexp}
3182 This is similar to the standard C @code{frexp} function (@pxref{Normalization
3183 Functions,,, libc, The GNU C Library Reference Manual}).
3186 @deftypefun {char *} mpz_get_str (char *@var{str}, int @var{base}, mpz_t @var{op})
3187 Convert @var{op} to a string of digits in base @var{base}. The base argument
3188 may vary from 2 to 62 or from @minus{}2 to @minus{}36.
3190 For @var{base} in the range 2..36, digits and lower-case letters are used; for
3191 @minus{}2..@minus{}36, digits and upper-case letters are used; for 37..62,
3192 digits, upper-case letters, and lower-case letters (in that significance order)
3195 If @var{str} is @code{NULL}, the result string is allocated using the current
3196 allocation function (@pxref{Custom Allocation}). The block will be
3197 @code{strlen(str)+1} bytes, that being exactly enough for the string and
3200 If @var{str} is not @code{NULL}, it should point to a block of storage large
3201 enough for the result, that being @code{mpz_sizeinbase (@var{op}, @var{base})
3202 + 2}. The two extra bytes are for a possible minus sign, and the
3205 A pointer to the result string is returned, being either the allocated block,
3206 or the given @var{str}.
3211 @node Integer Arithmetic, Integer Division, Converting Integers, Integer Functions
3212 @comment node-name, next, previous, up
3213 @section Arithmetic Functions
3214 @cindex Integer arithmetic functions
3215 @cindex Arithmetic functions
3217 @deftypefun void mpz_add (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3218 @deftypefunx void mpz_add_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
3219 Set @var{rop} to @math{@var{op1} + @var{op2}}.
3222 @deftypefun void mpz_sub (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3223 @deftypefunx void mpz_sub_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
3224 @deftypefunx void mpz_ui_sub (mpz_t @var{rop}, unsigned long int @var{op1}, mpz_t @var{op2})
3225 Set @var{rop} to @var{op1} @minus{} @var{op2}.
3228 @deftypefun void mpz_mul (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3229 @deftypefunx void mpz_mul_si (mpz_t @var{rop}, mpz_t @var{op1}, long int @var{op2})
3230 @deftypefunx void mpz_mul_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
3231 Set @var{rop} to @math{@var{op1} @GMPtimes{} @var{op2}}.
3234 @deftypefun void mpz_addmul (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3235 @deftypefunx void mpz_addmul_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
3236 Set @var{rop} to @math{@var{rop} + @var{op1} @GMPtimes{} @var{op2}}.
3239 @deftypefun void mpz_submul (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3240 @deftypefunx void mpz_submul_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
3241 Set @var{rop} to @math{@var{rop} - @var{op1} @GMPtimes{} @var{op2}}.
3244 @deftypefun void mpz_mul_2exp (mpz_t @var{rop}, mpz_t @var{op1}, mp_bitcnt_t @var{op2})
3245 @cindex Bit shift left
3246 Set @var{rop} to @m{@var{op1} \times 2^{op2}, @var{op1} times 2 raised to
3247 @var{op2}}. This operation can also be defined as a left shift by @var{op2}
3251 @deftypefun void mpz_neg (mpz_t @var{rop}, mpz_t @var{op})
3252 Set @var{rop} to @minus{}@var{op}.
3255 @deftypefun void mpz_abs (mpz_t @var{rop}, mpz_t @var{op})
3256 Set @var{rop} to the absolute value of @var{op}.
3261 @node Integer Division, Integer Exponentiation, Integer Arithmetic, Integer Functions
3262 @section Division Functions
3263 @cindex Integer division functions
3264 @cindex Division functions
3266 Division is undefined if the divisor is zero. Passing a zero divisor to the
3267 division or modulo functions (including the modular powering functions
3268 @code{mpz_powm} and @code{mpz_powm_ui}), will cause an intentional division by
3269 zero. This lets a program handle arithmetic exceptions in these functions the
3270 same way as for normal C @code{int} arithmetic.
3272 @c Separate deftypefun groups for cdiv, fdiv and tdiv produce a blank line
3273 @c between each, and seem to let tex do a better job of page breaks than an
3274 @c @sp 1 in the middle of one big set.
3276 @deftypefun void mpz_cdiv_q (mpz_t @var{q}, mpz_t @var{n}, mpz_t @var{d})
3277 @deftypefunx void mpz_cdiv_r (mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
3278 @deftypefunx void mpz_cdiv_qr (mpz_t @var{q}, mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
3280 @deftypefunx {unsigned long int} mpz_cdiv_q_ui (mpz_t @var{q}, mpz_t @var{n}, @w{unsigned long int @var{d}})
3281 @deftypefunx {unsigned long int} mpz_cdiv_r_ui (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{d}})
3282 @deftypefunx {unsigned long int} mpz_cdiv_qr_ui (mpz_t @var{q}, mpz_t @var{r}, @w{mpz_t @var{n}}, @w{unsigned long int @var{d}})
3283 @deftypefunx {unsigned long int} mpz_cdiv_ui (mpz_t @var{n}, @w{unsigned long int @var{d}})
3285 @deftypefunx void mpz_cdiv_q_2exp (mpz_t @var{q}, mpz_t @var{n}, @w{mp_bitcnt_t @var{b}})
3286 @deftypefunx void mpz_cdiv_r_2exp (mpz_t @var{r}, mpz_t @var{n}, @w{mp_bitcnt_t @var{b}})
3289 @deftypefun void mpz_fdiv_q (mpz_t @var{q}, mpz_t @var{n}, mpz_t @var{d})
3290 @deftypefunx void mpz_fdiv_r (mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
3291 @deftypefunx void mpz_fdiv_qr (mpz_t @var{q}, mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
3293 @deftypefunx {unsigned long int} mpz_fdiv_q_ui (mpz_t @var{q}, mpz_t @var{n}, @w{unsigned long int @var{d}})
3294 @deftypefunx {unsigned long int} mpz_fdiv_r_ui (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{d}})
3295 @deftypefunx {unsigned long int} mpz_fdiv_qr_ui (mpz_t @var{q}, mpz_t @var{r}, @w{mpz_t @var{n}}, @w{unsigned long int @var{d}})
3296 @deftypefunx {unsigned long int} mpz_fdiv_ui (mpz_t @var{n}, @w{unsigned long int @var{d}})
3298 @deftypefunx void mpz_fdiv_q_2exp (mpz_t @var{q}, mpz_t @var{n}, @w{mp_bitcnt_t @var{b}})
3299 @deftypefunx void mpz_fdiv_r_2exp (mpz_t @var{r}, mpz_t @var{n}, @w{mp_bitcnt_t @var{b}})
3302 @deftypefun void mpz_tdiv_q (mpz_t @var{q}, mpz_t @var{n}, mpz_t @var{d})
3303 @deftypefunx void mpz_tdiv_r (mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
3304 @deftypefunx void mpz_tdiv_qr (mpz_t @var{q}, mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
3306 @deftypefunx {unsigned long int} mpz_tdiv_q_ui (mpz_t @var{q}, mpz_t @var{n}, @w{unsigned long int @var{d}})
3307 @deftypefunx {unsigned long int} mpz_tdiv_r_ui (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{d}})
3308 @deftypefunx {unsigned long int} mpz_tdiv_qr_ui (mpz_t @var{q}, mpz_t @var{r}, @w{mpz_t @var{n}}, @w{unsigned long int @var{d}})
3309 @deftypefunx {unsigned long int} mpz_tdiv_ui (mpz_t @var{n}, @w{unsigned long int @var{d}})
3311 @deftypefunx void mpz_tdiv_q_2exp (mpz_t @var{q}, mpz_t @var{n}, @w{mp_bitcnt_t @var{b}})
3312 @deftypefunx void mpz_tdiv_r_2exp (mpz_t @var{r}, mpz_t @var{n}, @w{mp_bitcnt_t @var{b}})
3313 @cindex Bit shift right
3316 Divide @var{n} by @var{d}, forming a quotient @var{q} and/or remainder
3317 @var{r}. For the @code{2exp} functions, @m{@var{d}=2^b, @var{d}=2^@var{b}}.
3318 The rounding is in three styles, each suiting different applications.
3322 @code{cdiv} rounds @var{q} up towards @m{+\infty, +infinity}, and @var{r} will
3323 have the opposite sign to @var{d}. The @code{c} stands for ``ceil''.
3326 @code{fdiv} rounds @var{q} down towards @m{-\infty, @minus{}infinity}, and
3327 @var{r} will have the same sign as @var{d}. The @code{f} stands for
3331 @code{tdiv} rounds @var{q} towards zero, and @var{r} will have the same sign
3332 as @var{n}. The @code{t} stands for ``truncate''.
3335 In all cases @var{q} and @var{r} will satisfy
3336 @m{@var{n}=@var{q}@var{d}+@var{r}, @var{n}=@var{q}*@var{d}+@var{r}}, and
3337 @var{r} will satisfy @math{0@le{}@GMPabs{@var{r}}<@GMPabs{@var{d}}}.
3339 The @code{q} functions calculate only the quotient, the @code{r} functions
3340 only the remainder, and the @code{qr} functions calculate both. Note that for
3341 @code{qr} the same variable cannot be passed for both @var{q} and @var{r}, or
3342 results will be unpredictable.
3344 For the @code{ui} variants the return value is the remainder, and in fact
3345 returning the remainder is all the @code{div_ui} functions do. For
3346 @code{tdiv} and @code{cdiv} the remainder can be negative, so for those the
3347 return value is the absolute value of the remainder.
3349 For the @code{2exp} variants the divisor is @m{2^b,2^@var{b}}. These
3350 functions are implemented as right shifts and bit masks, but of course they
3351 round the same as the other functions.
3353 For positive @var{n} both @code{mpz_fdiv_q_2exp} and @code{mpz_tdiv_q_2exp}
3354 are simple bitwise right shifts. For negative @var{n}, @code{mpz_fdiv_q_2exp}
3355 is effectively an arithmetic right shift treating @var{n} as twos complement
3356 the same as the bitwise logical functions do, whereas @code{mpz_tdiv_q_2exp}
3357 effectively treats @var{n} as sign and magnitude.
3360 @deftypefun void mpz_mod (mpz_t @var{r}, mpz_t @var{n}, mpz_t @var{d})
3361 @deftypefunx {unsigned long int} mpz_mod_ui (mpz_t @var{r}, mpz_t @var{n}, @w{unsigned long int @var{d}})
3362 Set @var{r} to @var{n} @code{mod} @var{d}. The sign of the divisor is
3363 ignored; the result is always non-negative.
3365 @code{mpz_mod_ui} is identical to @code{mpz_fdiv_r_ui} above, returning the
3366 remainder as well as setting @var{r}. See @code{mpz_fdiv_ui} above if only
3367 the return value is wanted.
3370 @deftypefun void mpz_divexact (mpz_t @var{q}, mpz_t @var{n}, mpz_t @var{d})
3371 @deftypefunx void mpz_divexact_ui (mpz_t @var{q}, mpz_t @var{n}, unsigned long @var{d})
3372 @cindex Exact division functions
3373 Set @var{q} to @var{n}/@var{d}. These functions produce correct results only
3374 when it is known in advance that @var{d} divides @var{n}.
3376 These routines are much faster than the other division functions, and are the
3377 best choice when exact division is known to occur, for example reducing a
3378 rational to lowest terms.
3381 @deftypefun int mpz_divisible_p (mpz_t @var{n}, mpz_t @var{d})
3382 @deftypefunx int mpz_divisible_ui_p (mpz_t @var{n}, unsigned long int @var{d})
3383 @deftypefunx int mpz_divisible_2exp_p (mpz_t @var{n}, mp_bitcnt_t @var{b})
3384 @cindex Divisibility functions
3385 Return non-zero if @var{n} is exactly divisible by @var{d}, or in the case of
3386 @code{mpz_divisible_2exp_p} by @m{2^b,2^@var{b}}.
3388 @var{n} is divisible by @var{d} if there exists an integer @var{q} satisfying
3389 @math{@var{n} = @var{q}@GMPmultiply{}@var{d}}. Unlike the other division
3390 functions, @math{@var{d}=0} is accepted and following the rule it can be seen
3391 that only 0 is considered divisible by 0.
3394 @deftypefun int mpz_congruent_p (mpz_t @var{n}, mpz_t @var{c}, mpz_t @var{d})
3395 @deftypefunx int mpz_congruent_ui_p (mpz_t @var{n}, unsigned long int @var{c}, unsigned long int @var{d})
3396 @deftypefunx int mpz_congruent_2exp_p (mpz_t @var{n}, mpz_t @var{c}, mp_bitcnt_t @var{b})
3397 @cindex Divisibility functions
3398 @cindex Congruence functions
3399 Return non-zero if @var{n} is congruent to @var{c} modulo @var{d}, or in the
3400 case of @code{mpz_congruent_2exp_p} modulo @m{2^b,2^@var{b}}.
3402 @var{n} is congruent to @var{c} mod @var{d} if there exists an integer @var{q}
3403 satisfying @math{@var{n} = @var{c} + @var{q}@GMPmultiply{}@var{d}}. Unlike
3404 the other division functions, @math{@var{d}=0} is accepted and following the
3405 rule it can be seen that @var{n} and @var{c} are considered congruent mod 0
3406 only when exactly equal.
3411 @node Integer Exponentiation, Integer Roots, Integer Division, Integer Functions
3412 @section Exponentiation Functions
3413 @cindex Integer exponentiation functions
3414 @cindex Exponentiation functions
3415 @cindex Powering functions
3417 @deftypefun void mpz_powm (mpz_t @var{rop}, mpz_t @var{base}, mpz_t @var{exp}, mpz_t @var{mod})
3418 @deftypefunx void mpz_powm_ui (mpz_t @var{rop}, mpz_t @var{base}, unsigned long int @var{exp}, mpz_t @var{mod})
3419 Set @var{rop} to @m{base^{exp} \bmod mod, (@var{base} raised to @var{exp})
3422 Negative @var{exp} is supported if an inverse @math{@var{base}^@W{-1} @bmod
3423 @var{mod}} exists (see @code{mpz_invert} in @ref{Number Theoretic Functions}).
3424 If an inverse doesn't exist then a divide by zero is raised.
3427 @deftypefun void mpz_powm_sec (mpz_t @var{rop}, mpz_t @var{base}, mpz_t @var{exp}, mpz_t @var{mod})
3428 Set @var{rop} to @m{base^{exp} \bmod mod, (@var{base} raised to @var{exp})
3431 It is required that @math{@var{exp} > 0} and that @var{mod} is odd.
3433 This function is designed to take the same time and have the same cache access
3434 patterns for any two same-size arguments, assuming that function arguments are
3435 placed at the same position and that the machine state is identical upon
3436 function entry. This function is intended for cryptographic purposes, where
3437 resilience to side-channel attacks is desired.
3440 @deftypefun void mpz_pow_ui (mpz_t @var{rop}, mpz_t @var{base}, unsigned long int @var{exp})
3441 @deftypefunx void mpz_ui_pow_ui (mpz_t @var{rop}, unsigned long int @var{base}, unsigned long int @var{exp})
3442 Set @var{rop} to @m{base^{exp}, @var{base} raised to @var{exp}}. The case
3443 @math{0^0} yields 1.
3448 @node Integer Roots, Number Theoretic Functions, Integer Exponentiation, Integer Functions
3449 @section Root Extraction Functions
3450 @cindex Integer root functions
3451 @cindex Root extraction functions
3453 @deftypefun int mpz_root (mpz_t @var{rop}, mpz_t @var{op}, unsigned long int @var{n})
3454 Set @var{rop} to @m{\lfloor\root n \of {op}\rfloor@C{},} the truncated integer
3455 part of the @var{n}th root of @var{op}. Return non-zero if the computation
3456 was exact, i.e., if @var{op} is @var{rop} to the @var{n}th power.
3459 @deftypefun void mpz_rootrem (mpz_t @var{root}, mpz_t @var{rem}, mpz_t @var{u}, unsigned long int @var{n})
3460 Set @var{root} to @m{\lfloor\root n \of {u}\rfloor@C{},} the truncated
3461 integer part of the @var{n}th root of @var{u}. Set @var{rem} to the
3462 remainder, @m{(@var{u} - @var{root}^n),
3463 @var{u}@minus{}@var{root}**@var{n}}.
3466 @deftypefun void mpz_sqrt (mpz_t @var{rop}, mpz_t @var{op})
3467 Set @var{rop} to @m{\lfloor\sqrt{@var{op}}\rfloor@C{},} the truncated
3468 integer part of the square root of @var{op}.
3471 @deftypefun void mpz_sqrtrem (mpz_t @var{rop1}, mpz_t @var{rop2}, mpz_t @var{op})
3472 Set @var{rop1} to @m{\lfloor\sqrt{@var{op}}\rfloor, the truncated integer part
3473 of the square root of @var{op}}, like @code{mpz_sqrt}. Set @var{rop2} to the
3474 remainder @m{(@var{op} - @var{rop1}^2),
3475 @var{op}@minus{}@var{rop1}*@var{rop1}}, which will be zero if @var{op} is a
3478 If @var{rop1} and @var{rop2} are the same variable, the results are
3482 @deftypefun int mpz_perfect_power_p (mpz_t @var{op})
3483 @cindex Perfect power functions
3484 @cindex Root testing functions
3485 Return non-zero if @var{op} is a perfect power, i.e., if there exist integers
3486 @m{a,@var{a}} and @m{b,@var{b}}, with @m{b>1, @var{b}>1}, such that
3487 @m{@var{op}=a^b, @var{op} equals @var{a} raised to the power @var{b}}.
3489 Under this definition both 0 and 1 are considered to be perfect powers.
3490 Negative values of @var{op} are accepted, but of course can only be odd
3494 @deftypefun int mpz_perfect_square_p (mpz_t @var{op})
3495 @cindex Perfect square functions
3496 @cindex Root testing functions
3497 Return non-zero if @var{op} is a perfect square, i.e., if the square root of
3498 @var{op} is an integer. Under this definition both 0 and 1 are considered to
3504 @node Number Theoretic Functions, Integer Comparisons, Integer Roots, Integer Functions
3505 @section Number Theoretic Functions
3506 @cindex Number theoretic functions
3508 @deftypefun int mpz_probab_prime_p (mpz_t @var{n}, int @var{reps})
3509 @cindex Prime testing functions
3510 @cindex Probable prime testing functions
3511 Determine whether @var{n} is prime. Return 2 if @var{n} is definitely prime,
3512 return 1 if @var{n} is probably prime (without being certain), or return 0 if
3513 @var{n} is definitely composite.
3515 This function does some trial divisions, then some Miller-Rabin probabilistic
3516 primality tests. @var{reps} controls how many such tests are done, 5 to 10 is
3517 a reasonable number, more will reduce the chances of a composite being
3518 returned as ``probably prime''.
3520 Miller-Rabin and similar tests can be more properly called compositeness
3521 tests. Numbers which fail are known to be composite but those which pass
3522 might be prime or might be composite. Only a few composites pass, hence those
3523 which pass are considered probably prime.
3526 @deftypefun void mpz_nextprime (mpz_t @var{rop}, mpz_t @var{op})
3527 @cindex Next prime function
3528 Set @var{rop} to the next prime greater than @var{op}.
3530 This function uses a probabilistic algorithm to identify primes. For
3531 practical purposes it's adequate, the chance of a composite passing will be
3535 @c mpz_prime_p not implemented as of gmp 3.0.
3537 @c @deftypefun int mpz_prime_p (mpz_t @var{n})
3538 @c Return non-zero if @var{n} is prime and zero if @var{n} is a non-prime.
3539 @c This function is far slower than @code{mpz_probab_prime_p}, but then it
3540 @c never returns non-zero for composite numbers.
3542 @c (For practical purposes, using @code{mpz_probab_prime_p} is adequate.
3543 @c The likelihood of a programming error or hardware malfunction is orders
3544 @c of magnitudes greater than the likelihood for a composite to pass as a
3545 @c prime, if the @var{reps} argument is in the suggested range.)
3548 @deftypefun void mpz_gcd (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3549 @cindex Greatest common divisor functions
3550 @cindex GCD functions
3551 Set @var{rop} to the greatest common divisor of @var{op1} and @var{op2}.
3552 The result is always positive even if one or both input operands
3556 @deftypefun {unsigned long int} mpz_gcd_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long int @var{op2})
3557 Compute the greatest common divisor of @var{op1} and @var{op2}. If
3558 @var{rop} is not @code{NULL}, store the result there.
3560 If the result is small enough to fit in an @code{unsigned long int}, it is
3561 returned. If the result does not fit, 0 is returned, and the result is equal
3562 to the argument @var{op1}. Note that the result will always fit if @var{op2}
3566 @deftypefun void mpz_gcdext (mpz_t @var{g}, mpz_t @var{s}, mpz_t @var{t}, mpz_t @var{a}, mpz_t @var{b})
3567 @cindex Extended GCD
3568 @cindex GCD extended
3569 Set @var{g} to the greatest common divisor of @var{a} and @var{b}, and in
3570 addition set @var{s} and @var{t} to coefficients satisfying
3571 @math{@var{a}@GMPmultiply{}@var{s} + @var{b}@GMPmultiply{}@var{t} = @var{g}}.
3572 The value in @var{g} is always positive, even if one or both of @var{a} and
3573 @var{b} are negative. The values in @var{s} and @var{t} are chosen such that
3574 @math{@GMPabs{@var{s}} @le{} @GMPabs{@var{b}}} and @math{@GMPabs{@var{t}}
3575 @le{} @GMPabs{@var{a}}}.
3577 If @var{t} is @code{NULL} then that value is not computed.
3580 @deftypefun void mpz_lcm (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3581 @deftypefunx void mpz_lcm_ui (mpz_t @var{rop}, mpz_t @var{op1}, unsigned long @var{op2})
3582 @cindex Least common multiple functions
3583 @cindex LCM functions
3584 Set @var{rop} to the least common multiple of @var{op1} and @var{op2}.
3585 @var{rop} is always positive, irrespective of the signs of @var{op1} and
3586 @var{op2}. @var{rop} will be zero if either @var{op1} or @var{op2} is zero.
3589 @deftypefun int mpz_invert (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3590 @cindex Modular inverse functions
3591 @cindex Inverse modulo functions
3592 Compute the inverse of @var{op1} modulo @var{op2} and put the result in
3593 @var{rop}. If the inverse exists, the return value is non-zero and @var{rop}
3594 will satisfy @math{0 @le{} @var{rop} < @var{op2}}. If an inverse doesn't exist
3595 the return value is zero and @var{rop} is undefined.
3598 @deftypefun int mpz_jacobi (mpz_t @var{a}, mpz_t @var{b})
3599 @cindex Jacobi symbol functions
3600 Calculate the Jacobi symbol @m{\left(a \over b\right),
3601 (@var{a}/@var{b})}. This is defined only for @var{b} odd.
3604 @deftypefun int mpz_legendre (mpz_t @var{a}, mpz_t @var{p})
3605 @cindex Legendre symbol functions
3606 Calculate the Legendre symbol @m{\left(a \over p\right),
3607 (@var{a}/@var{p})}. This is defined only for @var{p} an odd positive
3608 prime, and for such @var{p} it's identical to the Jacobi symbol.
3611 @deftypefun int mpz_kronecker (mpz_t @var{a}, mpz_t @var{b})
3612 @deftypefunx int mpz_kronecker_si (mpz_t @var{a}, long @var{b})
3613 @deftypefunx int mpz_kronecker_ui (mpz_t @var{a}, unsigned long @var{b})
3614 @deftypefunx int mpz_si_kronecker (long @var{a}, mpz_t @var{b})
3615 @deftypefunx int mpz_ui_kronecker (unsigned long @var{a}, mpz_t @var{b})
3616 @cindex Kronecker symbol functions
3617 Calculate the Jacobi symbol @m{\left(a \over b\right),
3618 (@var{a}/@var{b})} with the Kronecker extension @m{\left(a \over
3619 2\right) = \left(2 \over a\right), (a/2)=(2/a)} when @math{a} odd, or
3620 @m{\left(a \over 2\right) = 0, (a/2)=0} when @math{a} even.
3622 When @var{b} is odd the Jacobi symbol and Kronecker symbol are
3623 identical, so @code{mpz_kronecker_ui} etc can be used for mixed
3624 precision Jacobi symbols too.
3626 For more information see Henri Cohen section 1.4.2 (@pxref{References}),
3627 or any number theory textbook. See also the example program
3628 @file{demos/qcn.c} which uses @code{mpz_kronecker_ui}.
3631 @deftypefun {mp_bitcnt_t} mpz_remove (mpz_t @var{rop}, mpz_t @var{op}, mpz_t @var{f})
3632 @cindex Remove factor functions
3633 @cindex Factor removal functions
3634 Remove all occurrences of the factor @var{f} from @var{op} and store the
3635 result in @var{rop}. The return value is how many such occurrences were
3639 @deftypefun void mpz_fac_ui (mpz_t @var{rop}, unsigned long int @var{op})
3640 @cindex Factorial functions
3641 Set @var{rop} to @var{op}!, the factorial of @var{op}.
3644 @deftypefun void mpz_bin_ui (mpz_t @var{rop}, mpz_t @var{n}, unsigned long int @var{k})
3645 @deftypefunx void mpz_bin_uiui (mpz_t @var{rop}, unsigned long int @var{n}, @w{unsigned long int @var{k}})
3646 @cindex Binomial coefficient functions
3647 Compute the binomial coefficient @m{\left({n}\atop{k}\right), @var{n} over
3648 @var{k}} and store the result in @var{rop}. Negative values of @var{n} are
3649 supported by @code{mpz_bin_ui}, using the identity
3650 @m{\left({-n}\atop{k}\right) = (-1)^k \left({n+k-1}\atop{k}\right),
3651 bin(-n@C{}k) = (-1)^k * bin(n+k-1@C{}k)}, see Knuth volume 1 section 1.2.6
3655 @deftypefun void mpz_fib_ui (mpz_t @var{fn}, unsigned long int @var{n})
3656 @deftypefunx void mpz_fib2_ui (mpz_t @var{fn}, mpz_t @var{fnsub1}, unsigned long int @var{n})
3657 @cindex Fibonacci sequence functions
3658 @code{mpz_fib_ui} sets @var{fn} to to @m{F_n,F[n]}, the @var{n}'th Fibonacci
3659 number. @code{mpz_fib2_ui} sets @var{fn} to @m{F_n,F[n]}, and @var{fnsub1} to
3662 These functions are designed for calculating isolated Fibonacci numbers. When
3663 a sequence of values is wanted it's best to start with @code{mpz_fib2_ui} and
3664 iterate the defining @m{F_{n+1} = F_n + F_{n-1}, F[n+1]=F[n]+F[n-1]} or
3668 @deftypefun void mpz_lucnum_ui (mpz_t @var{ln}, unsigned long int @var{n})
3669 @deftypefunx void mpz_lucnum2_ui (mpz_t @var{ln}, mpz_t @var{lnsub1}, unsigned long int @var{n})
3670 @cindex Lucas number functions
3671 @code{mpz_lucnum_ui} sets @var{ln} to to @m{L_n,L[n]}, the @var{n}'th Lucas
3672 number. @code{mpz_lucnum2_ui} sets @var{ln} to @m{L_n,L[n]}, and @var{lnsub1}
3673 to @m{L_{n-1},L[n-1]}.
3675 These functions are designed for calculating isolated Lucas numbers. When a
3676 sequence of values is wanted it's best to start with @code{mpz_lucnum2_ui} and
3677 iterate the defining @m{L_{n+1} = L_n + L_{n-1}, L[n+1]=L[n]+L[n-1]} or
3680 The Fibonacci numbers and Lucas numbers are related sequences, so it's never
3681 necessary to call both @code{mpz_fib2_ui} and @code{mpz_lucnum2_ui}. The
3682 formulas for going from Fibonacci to Lucas can be found in @ref{Lucas Numbers
3683 Algorithm}, the reverse is straightforward too.
3687 @node Integer Comparisons, Integer Logic and Bit Fiddling, Number Theoretic Functions, Integer Functions
3688 @comment node-name, next, previous, up
3689 @section Comparison Functions
3690 @cindex Integer comparison functions
3691 @cindex Comparison functions
3693 @deftypefn Function int mpz_cmp (mpz_t @var{op1}, mpz_t @var{op2})
3694 @deftypefnx Function int mpz_cmp_d (mpz_t @var{op1}, double @var{op2})
3695 @deftypefnx Macro int mpz_cmp_si (mpz_t @var{op1}, signed long int @var{op2})
3696 @deftypefnx Macro int mpz_cmp_ui (mpz_t @var{op1}, unsigned long int @var{op2})
3697 Compare @var{op1} and @var{op2}. Return a positive value if @math{@var{op1} >
3698 @var{op2}}, zero if @math{@var{op1} = @var{op2}}, or a negative value if
3699 @math{@var{op1} < @var{op2}}.
3701 @code{mpz_cmp_ui} and @code{mpz_cmp_si} are macros and will evaluate their
3702 arguments more than once. @code{mpz_cmp_d} can be called with an infinity,
3703 but results are undefined for a NaN.
3706 @deftypefn Function int mpz_cmpabs (mpz_t @var{op1}, mpz_t @var{op2})
3707 @deftypefnx Function int mpz_cmpabs_d (mpz_t @var{op1}, double @var{op2})
3708 @deftypefnx Function int mpz_cmpabs_ui (mpz_t @var{op1}, unsigned long int @var{op2})
3709 Compare the absolute values of @var{op1} and @var{op2}. Return a positive
3710 value if @math{@GMPabs{@var{op1}} > @GMPabs{@var{op2}}}, zero if
3711 @math{@GMPabs{@var{op1}} = @GMPabs{@var{op2}}}, or a negative value if
3712 @math{@GMPabs{@var{op1}} < @GMPabs{@var{op2}}}.
3714 @code{mpz_cmpabs_d} can be called with an infinity, but results are undefined
3718 @deftypefn Macro int mpz_sgn (mpz_t @var{op})
3720 @cindex Integer sign tests
3721 Return @math{+1} if @math{@var{op} > 0}, 0 if @math{@var{op} = 0}, and
3722 @math{-1} if @math{@var{op} < 0}.
3724 This function is actually implemented as a macro. It evaluates its argument
3729 @node Integer Logic and Bit Fiddling, I/O of Integers, Integer Comparisons, Integer Functions
3730 @comment node-name, next, previous, up
3731 @section Logical and Bit Manipulation Functions
3732 @cindex Logical functions
3733 @cindex Bit manipulation functions
3734 @cindex Integer logical functions
3735 @cindex Integer bit manipulation functions
3737 These functions behave as if twos complement arithmetic were used (although
3738 sign-magnitude is the actual implementation). The least significant bit is
3741 @deftypefun void mpz_and (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3742 Set @var{rop} to @var{op1} bitwise-and @var{op2}.
3745 @deftypefun void mpz_ior (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3746 Set @var{rop} to @var{op1} bitwise inclusive-or @var{op2}.
3749 @deftypefun void mpz_xor (mpz_t @var{rop}, mpz_t @var{op1}, mpz_t @var{op2})
3750 Set @var{rop} to @var{op1} bitwise exclusive-or @var{op2}.
3753 @deftypefun void mpz_com (mpz_t @var{rop}, mpz_t @var{op})
3754 Set @var{rop} to the one's complement of @var{op}.
3757 @deftypefun {mp_bitcnt_t} mpz_popcount (mpz_t @var{op})
3758 If @math{@var{op}@ge{}0}, return the population count of @var{op}, which is the
3759 number of 1 bits in the binary representation. If @math{@var{op}<0}, the
3760 number of 1s is infinite, and the return value is the largest possible
3764 @deftypefun {mp_bitcnt_t} mpz_hamdist (mpz_t @var{op1}, mpz_t @var{op2})
3765 If @var{op1} and @var{op2} are both @math{@ge{}0} or both @math{<0}, return the
3766 hamming distance between the two operands, which is the number of bit positions
3767 where @var{op1} and @var{op2} have different bit values. If one operand is
3768 @math{@ge{}0} and the other @math{<0} then the number of bits different is
3769 infinite, and the return value is the largest possible @code{mp_bitcnt_t}.
3772 @deftypefun {mp_bitcnt_t} mpz_scan0 (mpz_t @var{op}, mp_bitcnt_t @var{starting_bit})
3773 @deftypefunx {mp_bitcnt_t} mpz_scan1 (mpz_t @var{op}, mp_bitcnt_t @var{starting_bit})
3774 @cindex Bit scanning functions
3775 @cindex Scan bit functions
3776 Scan @var{op}, starting from bit @var{starting_bit}, towards more significant
3777 bits, until the first 0 or 1 bit (respectively) is found. Return the index of
3780 If the bit at @var{starting_bit} is already what's sought, then
3781 @var{starting_bit} is returned.
3783 If there's no bit found, then the largest possible @code{mp_bitcnt_t} is
3784 returned. This will happen in @code{mpz_scan0} past the end of a negative
3785 number, or @code{mpz_scan1} past the end of a nonnegative number.
3788 @deftypefun void mpz_setbit (mpz_t @var{rop}, mp_bitcnt_t @var{bit_index})
3789 Set bit @var{bit_index} in @var{rop}.
3792 @deftypefun void mpz_clrbit (mpz_t @var{rop}, mp_bitcnt_t @var{bit_index})
3793 Clear bit @var{bit_index} in @var{rop}.
3796 @deftypefun void mpz_combit (mpz_t @var{rop}, mp_bitcnt_t @var{bit_index})
3797 Complement bit @var{bit_index} in @var{rop}.
3800 @deftypefun int mpz_tstbit (mpz_t @var{op}, mp_bitcnt_t @var{bit_index})
3801 Test bit @var{bit_index} in @var{op} and return 0 or 1 accordingly.
3804 @node I/O of Integers, Integer Random Numbers, Integer Logic and Bit Fiddling, Integer Functions
3805 @comment node-name, next, previous, up
3806 @section Input and Output Functions
3807 @cindex Integer input and output functions
3808 @cindex Input functions
3809 @cindex Output functions
3810 @cindex I/O functions
3812 Functions that perform input from a stdio stream, and functions that output to
3813 a stdio stream. Passing a @code{NULL} pointer for a @var{stream} argument to any of
3814 these functions will make them read from @code{stdin} and write to
3815 @code{stdout}, respectively.
3817 When using any of these functions, it is a good idea to include @file{stdio.h}
3818 before @file{gmp.h}, since that will allow @file{gmp.h} to define prototypes
3819 for these functions.
3821 @deftypefun size_t mpz_out_str (FILE *@var{stream}, int @var{base}, mpz_t @var{op})
3822 Output @var{op} on stdio stream @var{stream}, as a string of digits in base
3823 @var{base}. The base argument may vary from 2 to 62 or from @minus{}2 to
3826 For @var{base} in the range 2..36, digits and lower-case letters are used; for
3827 @minus{}2..@minus{}36, digits and upper-case letters are used; for 37..62,
3828 digits, upper-case letters, and lower-case letters (in that significance order)
3831 Return the number of bytes written, or if an error occurred, return 0.
3834 @deftypefun size_t mpz_inp_str (mpz_t @var{rop}, FILE *@var{stream}, int @var{base})
3835 Input a possibly white-space preceded string in base @var{base} from stdio
3836 stream @var{stream}, and put the read integer in @var{rop}.
3838 The @var{base} may vary from 2 to 62, or if @var{base} is 0, then the leading
3839 characters are used: @code{0x} and @code{0X} for hexadecimal, @code{0b} and
3840 @code{0B} for binary, @code{0} for octal, or decimal otherwise.
3842 For bases up to 36, case is ignored; upper-case and lower-case letters have
3843 the same value. For bases 37 to 62, upper-case letter represent the usual
3844 10..35 while lower-case letter represent 36..61.
3846 Return the number of bytes read, or if an error occurred, return 0.
3849 @deftypefun size_t mpz_out_raw (FILE *@var{stream}, mpz_t @var{op})
3850 Output @var{op} on stdio stream @var{stream}, in raw binary format. The
3851 integer is written in a portable format, with 4 bytes of size information, and
3852 that many bytes of limbs. Both the size and the limbs are written in
3853 decreasing significance order (i.e., in big-endian).
3855 The output can be read with @code{mpz_inp_raw}.
3857 Return the number of bytes written, or if an error occurred, return 0.
3859 The output of this can not be read by @code{mpz_inp_raw} from GMP 1, because
3860 of changes necessary for compatibility between 32-bit and 64-bit machines.
3863 @deftypefun size_t mpz_inp_raw (mpz_t @var{rop}, FILE *@var{stream})
3864 Input from stdio stream @var{stream} in the format written by
3865 @code{mpz_out_raw}, and put the result in @var{rop}. Return the number of
3866 bytes read, or if an error occurred, return 0.
3868 This routine can read the output from @code{mpz_out_raw} also from GMP 1, in
3869 spite of changes necessary for compatibility between 32-bit and 64-bit
3875 @node Integer Random Numbers, Integer Import and Export, I/O of Integers, Integer Functions
3876 @comment node-name, next, previous, up
3877 @section Random Number Functions
3878 @cindex Integer random number functions
3879 @cindex Random number functions
3881 The random number functions of GMP come in two groups; older function
3882 that rely on a global state, and newer functions that accept a state
3883 parameter that is read and modified. Please see the @ref{Random Number
3884 Functions} for more information on how to use and not to use random
3887 @deftypefun void mpz_urandomb (mpz_t @var{rop}, gmp_randstate_t @var{state}, mp_bitcnt_t @var{n})
3888 Generate a uniformly distributed random integer in the range 0 to @m{2^n-1,
3889 2^@var{n}@minus{}1}, inclusive.
3891 The variable @var{state} must be initialized by calling one of the
3892 @code{gmp_randinit} functions (@ref{Random State Initialization}) before
3893 invoking this function.
3896 @deftypefun void mpz_urandomm (mpz_t @var{rop}, gmp_randstate_t @var{state}, mpz_t @var{n})
3897 Generate a uniform random integer in the range 0 to @math{@var{n}-1},
3900 The variable @var{state} must be initialized by calling one of the
3901 @code{gmp_randinit} functions (@ref{Random State Initialization})
3902 before invoking this function.
3905 @deftypefun void mpz_rrandomb (mpz_t @var{rop}, gmp_randstate_t @var{state}, mp_bitcnt_t @var{n})
3906 Generate a random integer with long strings of zeros and ones in the
3907 binary representation. Useful for testing functions and algorithms,
3908 since this kind of random numbers have proven to be more likely to
3909 trigger corner-case bugs. The random number will be in the range
3910 0 to @m{2^n-1, 2^@var{n}@minus{}1}, inclusive.
3912 The variable @var{state} must be initialized by calling one of the
3913 @code{gmp_randinit} functions (@ref{Random State Initialization})
3914 before invoking this function.
3917 @deftypefun void mpz_random (mpz_t @var{rop}, mp_size_t @var{max_size})
3918 Generate a random integer of at most @var{max_size} limbs. The generated
3919 random number doesn't satisfy any particular requirements of randomness.
3920 Negative random numbers are generated when @var{max_size} is negative.
3922 This function is obsolete. Use @code{mpz_urandomb} or
3923 @code{mpz_urandomm} instead.
3926 @deftypefun void mpz_random2 (mpz_t @var{rop}, mp_size_t @var{max_size})
3927 Generate a random integer of at most @var{max_size} limbs, with long strings
3928 of zeros and ones in the binary representation. Useful for testing functions
3929 and algorithms, since this kind of random numbers have proven to be more
3930 likely to trigger corner-case bugs. Negative random numbers are generated
3931 when @var{max_size} is negative.
3933 This function is obsolete. Use @code{mpz_rrandomb} instead.
3937 @node Integer Import and Export, Miscellaneous Integer Functions, Integer Random Numbers, Integer Functions
3938 @section Integer Import and Export
3940 @code{mpz_t} variables can be converted to and from arbitrary words of binary
3941 data with the following functions.
3943 @deftypefun void mpz_import (mpz_t @var{rop}, size_t @var{count}, int @var{order}, size_t @var{size}, int @var{endian}, size_t @var{nails}, const void *@var{op})
3944 @cindex Integer import
3946 Set @var{rop} from an array of word data at @var{op}.
3948 The parameters specify the format of the data. @var{count} many words are
3949 read, each @var{size} bytes. @var{order} can be 1 for most significant word
3950 first or -1 for least significant first. Within each word @var{endian} can be
3951 1 for most significant byte first, -1 for least significant first, or 0 for
3952 the native endianness of the host CPU@. The most significant @var{nails} bits
3953 of each word are skipped, this can be 0 to use the full words.
3955 There is no sign taken from the data, @var{rop} will simply be a positive
3956 integer. An application can handle any sign itself, and apply it for instance
3957 with @code{mpz_neg}.
3959 There are no data alignment restrictions on @var{op}, any address is allowed.
3961 Here's an example converting an array of @code{unsigned long} data, most
3962 significant element first, and host byte order within each value.
3965 unsigned long a[20];
3966 /* Initialize @var{z} and @var{a} */
3967 mpz_import (z, 20, 1, sizeof(a[0]), 0, 0, a);
3970 This example assumes the full @code{sizeof} bytes are used for data in the
3971 given type, which is usually true, and certainly true for @code{unsigned long}
3972 everywhere we know of. However on Cray vector systems it may be noted that
3973 @code{short} and @code{int} are always stored in 8 bytes (and with
3974 @code{sizeof} indicating that) but use only 32 or 46 bits. The @var{nails}
3975 feature can account for this, by passing for instance
3976 @code{8*sizeof(int)-INT_BIT}.
3979 @deftypefun {void *} mpz_export (void *@var{rop}, size_t *@var{countp}, int @var{order}, size_t @var{size}, int @var{endian}, size_t @var{nails}, mpz_t @var{op})
3980 @cindex Integer export
3982 Fill @var{rop} with word data from @var{op}.
3984 The parameters specify the format of the data produced. Each word will be
3985 @var{size} bytes and @var{order} can be 1 for most significant word first or
3986 -1 for least significant first. Within each word @var{endian} can be 1 for
3987 most significant byte first, -1 for least significant first, or 0 for the
3988 native endianness of the host CPU@. The most significant @var{nails} bits of
3989 each word are unused and set to zero, this can be 0 to produce full words.
3991 The number of words produced is written to @code{*@var{countp}}, or
3992 @var{countp} can be @code{NULL} to discard the count. @var{rop} must have
3993 enough space for the data, or if @var{rop} is @code{NULL} then a result array
3994 of the necessary size is allocated using the current GMP allocation function
3995 (@pxref{Custom Allocation}). In either case the return value is the
3996 destination used, either @var{rop} or the allocated block.
3998 If @var{op} is non-zero then the most significant word produced will be
3999 non-zero. If @var{op} is zero then the count returned will be zero and
4000 nothing written to @var{rop}. If @var{rop} is @code{NULL} in this case, no
4001 block is allocated, just @code{NULL} is returned.
4003 The sign of @var{op} is ignored, just the absolute value is exported. An
4004 application can use @code{mpz_sgn} to get the sign and handle it as desired.
4005 (@pxref{Integer Comparisons})
4007 There are no data alignment restrictions on @var{rop}, any address is allowed.
4009 When an application is allocating space itself the required size can be
4010 determined with a calculation like the following. Since @code{mpz_sizeinbase}
4011 always returns at least 1, @code{count} here will be at least one, which
4012 avoids any portability problems with @code{malloc(0)}, though if @code{z} is
4013 zero no space at all is actually needed (or written).
4016 numb = 8*size - nail;
4017 count = (mpz_sizeinbase (z, 2) + numb-1) / numb;
4018 p = malloc (count * size);
4024 @node Miscellaneous Integer Functions, Integer Special Functions, Integer Import and Export, Integer Functions
4025 @comment node-name, next, previous, up
4026 @section Miscellaneous Functions
4027 @cindex Miscellaneous integer functions
4028 @cindex Integer miscellaneous functions
4030 @deftypefun int mpz_fits_ulong_p (mpz_t @var{op})
4031 @deftypefunx int mpz_fits_slong_p (mpz_t @var{op})
4032 @deftypefunx int mpz_fits_uint_p (mpz_t @var{op})
4033 @deftypefunx int mpz_fits_sint_p (mpz_t @var{op})
4034 @deftypefunx int mpz_fits_ushort_p (mpz_t @var{op})
4035 @deftypefunx int mpz_fits_sshort_p (mpz_t @var{op})
4036 Return non-zero iff the value of @var{op} fits in an @code{unsigned long int},
4037 @code{signed long int}, @code{unsigned int}, @code{signed int}, @code{unsigned
4038 short int}, or @code{signed short int}, respectively. Otherwise, return zero.
4041 @deftypefn Macro int mpz_odd_p (mpz_t @var{op})
4042 @deftypefnx Macro int mpz_even_p (mpz_t @var{op})
4043 Determine whether @var{op} is odd or even, respectively. Return non-zero if
4044 yes, zero if no. These macros evaluate their argument more than once.
4047 @deftypefun size_t mpz_sizeinbase (mpz_t @var{op}, int @var{base})
4048 @cindex Size in digits
4049 @cindex Digits in an integer
4050 Return the size of @var{op} measured in number of digits in the given
4051 @var{base}. @var{base} can vary from 2 to 62. The sign of @var{op} is
4052 ignored, just the absolute value is used. The result will be either exact or
4053 1 too big. If @var{base} is a power of 2, the result is always exact. If
4054 @var{op} is zero the return value is always 1.
4056 This function can be used to determine the space required when converting
4057 @var{op} to a string. The right amount of allocation is normally two more
4058 than the value returned by @code{mpz_sizeinbase}, one extra for a minus sign
4059 and one for the null-terminator.
4061 @cindex Most significant bit
4062 It will be noted that @code{mpz_sizeinbase(@var{op},2)} can be used to locate
4063 the most significant 1 bit in @var{op}, counting from 1. (Unlike the bitwise
4064 functions which start from 0, @xref{Integer Logic and Bit Fiddling,, Logical
4065 and Bit Manipulation Functions}.)
4069 @node Integer Special Functions, , Miscellaneous Integer Functions, Integer Functions
4070 @section Special Functions
4071 @cindex Special integer functions
4072 @cindex Integer special functions
4074 The functions in this section are for various special purposes. Most
4075 applications will not need them.
4077 @deftypefun void mpz_array_init (mpz_t @var{integer_array}, mp_size_t @var{array_size}, @w{mp_size_t @var{fixed_num_bits}})
4078 This is a special type of initialization. @strong{Fixed} space of
4079 @var{fixed_num_bits} is allocated to each of the @var{array_size} integers in
4080 @var{integer_array}. There is no way to free the storage allocated by this
4081 function. Don't call @code{mpz_clear}!
4083 The @var{integer_array} parameter is the first @code{mpz_t} in the array. For
4088 mpz_array_init (arr[0], 20000, 512);
4091 @c In case anyone's wondering, yes this parameter style is a bit anomalous,
4092 @c it'd probably be nicer if it was "arr" instead of "arr[0]". Obviously the
4093 @c two differ only in the declaration, not the pointer value, but changing is
4094 @c not possible since it'd provoke warnings or errors in existing sources.
4096 This function is only intended for programs that create a large number
4097 of integers and need to reduce memory usage by avoiding the overheads of
4098 allocating and reallocating lots of small blocks. In normal programs this
4099 function is not recommended.
4101 The space allocated to each integer by this function will not be automatically
4102 increased, unlike the normal @code{mpz_init}, so an application must ensure it
4103 is sufficient for any value stored. The following space requirements apply to
4108 @code{mpz_abs}, @code{mpz_neg}, @code{mpz_set}, @code{mpz_set_si} and
4109 @code{mpz_set_ui} need room for the value they store.
4112 @code{mpz_add}, @code{mpz_add_ui}, @code{mpz_sub} and @code{mpz_sub_ui} need
4113 room for the larger of the two operands, plus an extra
4114 @code{mp_bits_per_limb}.
4117 @code{mpz_mul}, @code{mpz_mul_ui} and @code{mpz_mul_ui} need room for the sum
4118 of the number of bits in their operands, but each rounded up to a multiple of
4119 @code{mp_bits_per_limb}.
4122 @code{mpz_swap} can be used between two array variables, but not between an
4123 array and a normal variable.
4126 For other functions, or if in doubt, the suggestion is to calculate in a
4127 regular @code{mpz_init} variable and copy the result to an array variable with
4131 @deftypefun {void *} _mpz_realloc (mpz_t @var{integer}, mp_size_t @var{new_alloc})
4132 Change the space for @var{integer} to @var{new_alloc} limbs. The value in
4133 @var{integer} is preserved if it fits, or is set to 0 if not. The return
4134 value is not useful to applications and should be ignored.
4136 @code{mpz_realloc2} is the preferred way to accomplish allocation changes like
4137 this. @code{mpz_realloc2} and @code{_mpz_realloc} are the same except that
4138 @code{_mpz_realloc} takes its size in limbs.
4141 @deftypefun mp_limb_t mpz_getlimbn (mpz_t @var{op}, mp_size_t @var{n})
4142 Return limb number @var{n} from @var{op}. The sign of @var{op} is ignored,
4143 just the absolute value is used. The least significant limb is number 0.
4145 @code{mpz_size} can be used to find how many limbs make up @var{op}.
4146 @code{mpz_getlimbn} returns zero if @var{n} is outside the range 0 to
4147 @code{mpz_size(@var{op})-1}.
4150 @deftypefun size_t mpz_size (mpz_t @var{op})
4151 Return the size of @var{op} measured in number of limbs. If @var{op} is zero,
4152 the returned value will be zero.
4153 @c (@xref{Nomenclature}, for an explanation of the concept @dfn{limb}.)
4158 @node Rational Number Functions, Floating-point Functions, Integer Functions, Top
4159 @comment node-name, next, previous, up
4160 @chapter Rational Number Functions
4161 @cindex Rational number functions
4163 This chapter describes the GMP functions for performing arithmetic on rational
4164 numbers. These functions start with the prefix @code{mpq_}.
4166 Rational numbers are stored in objects of type @code{mpq_t}.
4168 All rational arithmetic functions assume operands have a canonical form, and
4169 canonicalize their result. The canonical from means that the denominator and
4170 the numerator have no common factors, and that the denominator is positive.
4171 Zero has the unique representation 0/1.
4173 Pure assignment functions do not canonicalize the assigned variable. It is
4174 the responsibility of the user to canonicalize the assigned variable before
4175 any arithmetic operations are performed on that variable.
4177 @deftypefun void mpq_canonicalize (mpq_t @var{op})
4178 Remove any factors that are common to the numerator and denominator of
4179 @var{op}, and make the denominator positive.
4183 * Initializing Rationals::
4184 * Rational Conversions::
4185 * Rational Arithmetic::
4186 * Comparing Rationals::
4187 * Applying Integer Functions::
4188 * I/O of Rationals::
4191 @node Initializing Rationals, Rational Conversions, Rational Number Functions, Rational Number Functions
4192 @comment node-name, next, previous, up
4193 @section Initialization and Assignment Functions
4194 @cindex Rational assignment functions
4195 @cindex Assignment functions
4196 @cindex Rational initialization functions
4197 @cindex Initialization functions
4199 @deftypefun void mpq_init (mpq_t @var{x})
4200 Initialize @var{x} and set it to 0/1. Each variable should normally only be
4201 initialized once, or at least cleared out (using the function @code{mpq_clear})
4202 between each initialization.
4205 @deftypefun void mpq_inits (mpq_t @var{x}, ...)
4206 Initialize a NULL-terminated list of @code{mpq_t} variables, and set their
4210 @deftypefun void mpq_clear (mpq_t @var{x})
4211 Free the space occupied by @var{x}. Make sure to call this function for all
4212 @code{mpq_t} variables when you are done with them.
4215 @deftypefun void mpq_clears (mpq_t @var{x}, ...)
4216 Free the space occupied by a NULL-terminated list of @code{mpq_t} variables.
4219 @deftypefun void mpq_set (mpq_t @var{rop}, mpq_t @var{op})
4220 @deftypefunx void mpq_set_z (mpq_t @var{rop}, mpz_t @var{op})
4221 Assign @var{rop} from @var{op}.
4224 @deftypefun void mpq_set_ui (mpq_t @var{rop}, unsigned long int @var{op1}, unsigned long int @var{op2})
4225 @deftypefunx void mpq_set_si (mpq_t @var{rop}, signed long int @var{op1}, unsigned long int @var{op2})
4226 Set the value of @var{rop} to @var{op1}/@var{op2}. Note that if @var{op1} and
4227 @var{op2} have common factors, @var{rop} has to be passed to
4228 @code{mpq_canonicalize} before any operations are performed on @var{rop}.
4231 @deftypefun int mpq_set_str (mpq_t @var{rop}, char *@var{str}, int @var{base})
4232 Set @var{rop} from a null-terminated string @var{str} in the given @var{base}.
4234 The string can be an integer like ``41'' or a fraction like ``41/152''. The
4235 fraction must be in canonical form (@pxref{Rational Number Functions}), or if
4236 not then @code{mpq_canonicalize} must be called.
4238 The numerator and optional denominator are parsed the same as in
4239 @code{mpz_set_str} (@pxref{Assigning Integers}). White space is allowed in
4240 the string, and is simply ignored. The @var{base} can vary from 2 to 62, or
4241 if @var{base} is 0 then the leading characters are used: @code{0x} or @code{0X} for hex,
4242 @code{0b} or @code{0B} for binary,
4243 @code{0} for octal, or decimal otherwise. Note that this is done separately
4244 for the numerator and denominator, so for instance @code{0xEF/100} is 239/100,
4245 whereas @code{0xEF/0x100} is 239/256.
4247 The return value is 0 if the entire string is a valid number, or @minus{}1 if
4251 @deftypefun void mpq_swap (mpq_t @var{rop1}, mpq_t @var{rop2})
4252 Swap the values @var{rop1} and @var{rop2} efficiently.
4257 @node Rational Conversions, Rational Arithmetic, Initializing Rationals, Rational Number Functions
4258 @comment node-name, next, previous, up
4259 @section Conversion Functions
4260 @cindex Rational conversion functions
4261 @cindex Conversion functions
4263 @deftypefun double mpq_get_d (mpq_t @var{op})
4264 Convert @var{op} to a @code{double}, truncating if necessary (ie.@: rounding
4267 If the exponent from the conversion is too big or too small to fit a
4268 @code{double} then the result is system dependent. For too big an infinity is
4269 returned when available. For too small @math{0.0} is normally returned.
4270 Hardware overflow, underflow and denorm traps may or may not occur.
4273 @deftypefun void mpq_set_d (mpq_t @var{rop}, double @var{op})
4274 @deftypefunx void mpq_set_f (mpq_t @var{rop}, mpf_t @var{op})
4275 Set @var{rop} to the value of @var{op}. There is no rounding, this conversion
4279 @deftypefun {char *} mpq_get_str (char *@var{str}, int @var{base}, mpq_t @var{op})
4280 Convert @var{op} to a string of digits in base @var{base}. The base may vary
4281 from 2 to 36. The string will be of the form @samp{num/den}, or if the
4282 denominator is 1 then just @samp{num}.
4284 If @var{str} is @code{NULL}, the result string is allocated using the current
4285 allocation function (@pxref{Custom Allocation}). The block will be
4286 @code{strlen(str)+1} bytes, that being exactly enough for the string and
4289 If @var{str} is not @code{NULL}, it should point to a block of storage large
4290 enough for the result, that being
4293 mpz_sizeinbase (mpq_numref(@var{op}), @var{base})
4294 + mpz_sizeinbase (mpq_denref(@var{op}), @var{base}) + 3
4297 The three extra bytes are for a possible minus sign, possible slash, and the
4300 A pointer to the result string is returned, being either the allocated block,
4301 or the given @var{str}.
4305 @node Rational Arithmetic, Comparing Rationals, Rational Conversions, Rational Number Functions
4306 @comment node-name, next, previous, up
4307 @section Arithmetic Functions
4308 @cindex Rational arithmetic functions
4309 @cindex Arithmetic functions
4311 @deftypefun void mpq_add (mpq_t @var{sum}, mpq_t @var{addend1}, mpq_t @var{addend2})
4312 Set @var{sum} to @var{addend1} + @var{addend2}.
4315 @deftypefun void mpq_sub (mpq_t @var{difference}, mpq_t @var{minuend}, mpq_t @var{subtrahend})
4316 Set @var{difference} to @var{minuend} @minus{} @var{subtrahend}.
4319 @deftypefun void mpq_mul (mpq_t @var{product}, mpq_t @var{multiplier}, mpq_t @var{multiplicand})
4320 Set @var{product} to @math{@var{multiplier} @GMPtimes{} @var{multiplicand}}.
4323 @deftypefun void mpq_mul_2exp (mpq_t @var{rop}, mpq_t @var{op1}, mp_bitcnt_t @var{op2})
4324 Set @var{rop} to @m{@var{op1} \times 2^{op2}, @var{op1} times 2 raised to
4328 @deftypefun void mpq_div (mpq_t @var{quotient}, mpq_t @var{dividend}, mpq_t @var{divisor})
4329 @cindex Division functions
4330 Set @var{quotient} to @var{dividend}/@var{divisor}.
4333 @deftypefun void mpq_div_2exp (mpq_t @var{rop}, mpq_t @var{op1}, mp_bitcnt_t @var{op2})
4334 Set @var{rop} to @m{@var{op1}/2^{op2}, @var{op1} divided by 2 raised to
4338 @deftypefun void mpq_neg (mpq_t @var{negated_operand}, mpq_t @var{operand})
4339 Set @var{negated_operand} to @minus{}@var{operand}.
4342 @deftypefun void mpq_abs (mpq_t @var{rop}, mpq_t @var{op})
4343 Set @var{rop} to the absolute value of @var{op}.
4346 @deftypefun void mpq_inv (mpq_t @var{inverted_number}, mpq_t @var{number})
4347 Set @var{inverted_number} to 1/@var{number}. If the new denominator is
4348 zero, this routine will divide by zero.
4351 @node Comparing Rationals, Applying Integer Functions, Rational Arithmetic, Rational Number Functions
4352 @comment node-name, next, previous, up
4353 @section Comparison Functions
4354 @cindex Rational comparison functions
4355 @cindex Comparison functions
4357 @deftypefun int mpq_cmp (mpq_t @var{op1}, mpq_t @var{op2})
4358 Compare @var{op1} and @var{op2}. Return a positive value if @math{@var{op1} >
4359 @var{op2}}, zero if @math{@var{op1} = @var{op2}}, and a negative value if
4360 @math{@var{op1} < @var{op2}}.
4362 To determine if two rationals are equal, @code{mpq_equal} is faster than
4366 @deftypefn Macro int mpq_cmp_ui (mpq_t @var{op1}, unsigned long int @var{num2}, unsigned long int @var{den2})
4367 @deftypefnx Macro int mpq_cmp_si (mpq_t @var{op1}, long int @var{num2}, unsigned long int @var{den2})
4368 Compare @var{op1} and @var{num2}/@var{den2}. Return a positive value if
4369 @math{@var{op1} > @var{num2}/@var{den2}}, zero if @math{@var{op1} =
4370 @var{num2}/@var{den2}}, and a negative value if @math{@var{op1} <
4371 @var{num2}/@var{den2}}.
4373 @var{num2} and @var{den2} are allowed to have common factors.
4375 These functions are implemented as a macros and evaluate their arguments
4379 @deftypefn Macro int mpq_sgn (mpq_t @var{op})
4381 @cindex Rational sign tests
4382 Return @math{+1} if @math{@var{op} > 0}, 0 if @math{@var{op} = 0}, and
4383 @math{-1} if @math{@var{op} < 0}.
4385 This function is actually implemented as a macro. It evaluates its
4386 arguments multiple times.
4389 @deftypefun int mpq_equal (mpq_t @var{op1}, mpq_t @var{op2})
4390 Return non-zero if @var{op1} and @var{op2} are equal, zero if they are
4391 non-equal. Although @code{mpq_cmp} can be used for the same purpose, this
4392 function is much faster.
4395 @node Applying Integer Functions, I/O of Rationals, Comparing Rationals, Rational Number Functions
4396 @comment node-name, next, previous, up
4397 @section Applying Integer Functions to Rationals
4398 @cindex Rational numerator and denominator
4399 @cindex Numerator and denominator
4401 The set of @code{mpq} functions is quite small. In particular, there are few
4402 functions for either input or output. The following functions give direct
4403 access to the numerator and denominator of an @code{mpq_t}.
4405 Note that if an assignment to the numerator and/or denominator could take an
4406 @code{mpq_t} out of the canonical form described at the start of this chapter
4407 (@pxref{Rational Number Functions}) then @code{mpq_canonicalize} must be
4408 called before any other @code{mpq} functions are applied to that @code{mpq_t}.
4410 @deftypefn Macro mpz_t mpq_numref (mpq_t @var{op})
4411 @deftypefnx Macro mpz_t mpq_denref (mpq_t @var{op})
4412 Return a reference to the numerator and denominator of @var{op}, respectively.
4413 The @code{mpz} functions can be used on the result of these macros.
4416 @deftypefun void mpq_get_num (mpz_t @var{numerator}, mpq_t @var{rational})
4417 @deftypefunx void mpq_get_den (mpz_t @var{denominator}, mpq_t @var{rational})
4418 @deftypefunx void mpq_set_num (mpq_t @var{rational}, mpz_t @var{numerator})
4419 @deftypefunx void mpq_set_den (mpq_t @var{rational}, mpz_t @var{denominator})
4420 Get or set the numerator or denominator of a rational. These functions are
4421 equivalent to calling @code{mpz_set} with an appropriate @code{mpq_numref} or
4422 @code{mpq_denref}. Direct use of @code{mpq_numref} or @code{mpq_denref} is
4423 recommended instead of these functions.
4428 @node I/O of Rationals, , Applying Integer Functions, Rational Number Functions
4429 @comment node-name, next, previous, up
4430 @section Input and Output Functions
4431 @cindex Rational input and output functions
4432 @cindex Input functions
4433 @cindex Output functions
4434 @cindex I/O functions
4436 When using any of these functions, it's a good idea to include @file{stdio.h}
4437 before @file{gmp.h}, since that will allow @file{gmp.h} to define prototypes
4438 for these functions.
4440 Passing a @code{NULL} pointer for a @var{stream} argument to any of these
4441 functions will make them read from @code{stdin} and write to @code{stdout},
4444 @deftypefun size_t mpq_out_str (FILE *@var{stream}, int @var{base}, mpq_t @var{op})
4445 Output @var{op} on stdio stream @var{stream}, as a string of digits in base
4446 @var{base}. The base may vary from 2 to 36. Output is in the form
4447 @samp{num/den} or if the denominator is 1 then just @samp{num}.
4449 Return the number of bytes written, or if an error occurred, return 0.
4452 @deftypefun size_t mpq_inp_str (mpq_t @var{rop}, FILE *@var{stream}, int @var{base})
4453 Read a string of digits from @var{stream} and convert them to a rational in
4454 @var{rop}. Any initial white-space characters are read and discarded. Return
4455 the number of characters read (including white space), or 0 if a rational
4458 The input can be a fraction like @samp{17/63} or just an integer like
4459 @samp{123}. Reading stops at the first character not in this form, and white
4460 space is not permitted within the string. If the input might not be in
4461 canonical form, then @code{mpq_canonicalize} must be called (@pxref{Rational
4464 The @var{base} can be between 2 and 36, or can be 0 in which case the leading
4465 characters of the string determine the base, @samp{0x} or @samp{0X} for
4466 hexadecimal, @samp{0} for octal, or decimal otherwise. The leading characters
4467 are examined separately for the numerator and denominator of a fraction, so
4468 for instance @samp{0x10/11} is @math{16/11}, whereas @samp{0x10/0x11} is
4473 @node Floating-point Functions, Low-level Functions, Rational Number Functions, Top
4474 @comment node-name, next, previous, up
4475 @chapter Floating-point Functions
4476 @cindex Floating-point functions
4477 @cindex Float functions
4478 @cindex User-defined precision
4479 @cindex Precision of floats
4481 GMP floating point numbers are stored in objects of type @code{mpf_t} and
4482 functions operating on them have an @code{mpf_} prefix.
4484 The mantissa of each float has a user-selectable precision, limited only by
4485 available memory. Each variable has its own precision, and that can be
4486 increased or decreased at any time.
4488 The exponent of each float is a fixed precision, one machine word on most
4489 systems. In the current implementation the exponent is a count of limbs, so
4490 for example on a 32-bit system this means a range of roughly
4491 @math{2^@W{-68719476768}} to @math{2^@W{68719476736}}, or on a 64-bit system
4492 this will be greater. Note however @code{mpf_get_str} can only return an
4493 exponent which fits an @code{mp_exp_t} and currently @code{mpf_set_str}
4494 doesn't accept exponents bigger than a @code{long}.
4496 Each variable keeps a size for the mantissa data actually in use. This means
4497 that if a float is exactly represented in only a few bits then only those bits
4498 will be used in a calculation, even if the selected precision is high.
4500 All calculations are performed to the precision of the destination variable.
4501 Each function is defined to calculate with ``infinite precision'' followed by
4502 a truncation to the destination precision, but of course the work done is only
4503 what's needed to determine a result under that definition.
4505 The precision selected for a variable is a minimum value, GMP may increase it
4506 a little to facilitate efficient calculation. Currently this means rounding
4507 up to a whole limb, and then sometimes having a further partial limb,
4508 depending on the high limb of the mantissa. But applications shouldn't be
4509 concerned by such details.
4511 The mantissa in stored in binary, as might be imagined from the fact
4512 precisions are expressed in bits. One consequence of this is that decimal
4513 fractions like @math{0.1} cannot be represented exactly. The same is true of
4514 plain IEEE @code{double} floats. This makes both highly unsuitable for
4515 calculations involving money or other values that should be exact decimal
4516 fractions. (Suitably scaled integers, or perhaps rationals, are better
4519 @code{mpf} functions and variables have no special notion of infinity or
4520 not-a-number, and applications must take care not to overflow the exponent or
4521 results will be unpredictable. This might change in a future release.
4523 Note that the @code{mpf} functions are @emph{not} intended as a smooth
4524 extension to IEEE P754 arithmetic. In particular results obtained on one
4525 computer often differ from the results on a computer with a different word
4529 * Initializing Floats::
4530 * Assigning Floats::
4531 * Simultaneous Float Init & Assign::
4532 * Converting Floats::
4533 * Float Arithmetic::
4534 * Float Comparison::
4536 * Miscellaneous Float Functions::
4539 @node Initializing Floats, Assigning Floats, Floating-point Functions, Floating-point Functions
4540 @comment node-name, next, previous, up
4541 @section Initialization Functions
4542 @cindex Float initialization functions
4543 @cindex Initialization functions
4545 @deftypefun void mpf_set_default_prec (mp_bitcnt_t @var{prec})
4546 Set the default precision to be @strong{at least} @var{prec} bits. All
4547 subsequent calls to @code{mpf_init} will use this precision, but previously
4548 initialized variables are unaffected.
4551 @deftypefun {mp_bitcnt_t} mpf_get_default_prec (void)
4552 Return the default precision actually used.
4555 An @code{mpf_t} object must be initialized before storing the first value in
4556 it. The functions @code{mpf_init} and @code{mpf_init2} are used for that
4559 @deftypefun void mpf_init (mpf_t @var{x})
4560 Initialize @var{x} to 0. Normally, a variable should be initialized once only
4561 or at least be cleared, using @code{mpf_clear}, between initializations. The
4562 precision of @var{x} is undefined unless a default precision has already been
4563 established by a call to @code{mpf_set_default_prec}.
4566 @deftypefun void mpf_init2 (mpf_t @var{x}, mp_bitcnt_t @var{prec})
4567 Initialize @var{x} to 0 and set its precision to be @strong{at least}
4568 @var{prec} bits. Normally, a variable should be initialized once only or at
4569 least be cleared, using @code{mpf_clear}, between initializations.
4572 @deftypefun void mpf_inits (mpf_t @var{x}, ...)
4573 Initialize a NULL-terminated list of @code{mpf_t} variables, and set their
4574 values to 0. The precision of the initialized variables is undefined unless a
4575 default precision has already been established by a call to
4576 @code{mpf_set_default_prec}.
4579 @deftypefun void mpf_clear (mpf_t @var{x})
4580 Free the space occupied by @var{x}. Make sure to call this function for all
4581 @code{mpf_t} variables when you are done with them.
4584 @deftypefun void mpf_clears (mpf_t @var{x}, ...)
4585 Free the space occupied by a NULL-terminated list of @code{mpf_t} variables.
4589 Here is an example on how to initialize floating-point variables:
4593 mpf_init (x); /* use default precision */
4594 mpf_init2 (y, 256); /* precision @emph{at least} 256 bits */
4596 /* Unless the program is about to exit, do ... */
4602 The following three functions are useful for changing the precision during a
4603 calculation. A typical use would be for adjusting the precision gradually in
4604 iterative algorithms like Newton-Raphson, making the computation precision
4605 closely match the actual accurate part of the numbers.
4607 @deftypefun {mp_bitcnt_t} mpf_get_prec (mpf_t @var{op})
4608 Return the current precision of @var{op}, in bits.
4611 @deftypefun void mpf_set_prec (mpf_t @var{rop}, mp_bitcnt_t @var{prec})
4612 Set the precision of @var{rop} to be @strong{at least} @var{prec} bits. The
4613 value in @var{rop} will be truncated to the new precision.
4615 This function requires a call to @code{realloc}, and so should not be used in
4619 @deftypefun void mpf_set_prec_raw (mpf_t @var{rop}, mp_bitcnt_t @var{prec})
4620 Set the precision of @var{rop} to be @strong{at least} @var{prec} bits,
4621 without changing the memory allocated.
4623 @var{prec} must be no more than the allocated precision for @var{rop}, that
4624 being the precision when @var{rop} was initialized, or in the most recent
4625 @code{mpf_set_prec}.
4627 The value in @var{rop} is unchanged, and in particular if it had a higher
4628 precision than @var{prec} it will retain that higher precision. New values
4629 written to @var{rop} will use the new @var{prec}.
4631 Before calling @code{mpf_clear} or the full @code{mpf_set_prec}, another
4632 @code{mpf_set_prec_raw} call must be made to restore @var{rop} to its original
4633 allocated precision. Failing to do so will have unpredictable results.
4635 @code{mpf_get_prec} can be used before @code{mpf_set_prec_raw} to get the
4636 original allocated precision. After @code{mpf_set_prec_raw} it reflects the
4637 @var{prec} value set.
4639 @code{mpf_set_prec_raw} is an efficient way to use an @code{mpf_t} variable at
4640 different precisions during a calculation, perhaps to gradually increase
4641 precision in an iteration, or just to use various different precisions for
4642 different purposes during a calculation.
4647 @node Assigning Floats, Simultaneous Float Init & Assign, Initializing Floats, Floating-point Functions
4648 @comment node-name, next, previous, up
4649 @section Assignment Functions
4650 @cindex Float assignment functions
4651 @cindex Assignment functions
4653 These functions assign new values to already initialized floats
4654 (@pxref{Initializing Floats}).
4656 @deftypefun void mpf_set (mpf_t @var{rop}, mpf_t @var{op})
4657 @deftypefunx void mpf_set_ui (mpf_t @var{rop}, unsigned long int @var{op})
4658 @deftypefunx void mpf_set_si (mpf_t @var{rop}, signed long int @var{op})
4659 @deftypefunx void mpf_set_d (mpf_t @var{rop}, double @var{op})
4660 @deftypefunx void mpf_set_z (mpf_t @var{rop}, mpz_t @var{op})
4661 @deftypefunx void mpf_set_q (mpf_t @var{rop}, mpq_t @var{op})
4662 Set the value of @var{rop} from @var{op}.
4665 @deftypefun int mpf_set_str (mpf_t @var{rop}, char *@var{str}, int @var{base})
4666 Set the value of @var{rop} from the string in @var{str}. The string is of the
4667 form @samp{M@@N} or, if the base is 10 or less, alternatively @samp{MeN}.
4668 @samp{M} is the mantissa and @samp{N} is the exponent. The mantissa is always
4669 in the specified base. The exponent is either in the specified base or, if
4670 @var{base} is negative, in decimal. The decimal point expected is taken from
4671 the current locale, on systems providing @code{localeconv}.
4673 The argument @var{base} may be in the ranges 2 to 62, or @minus{}62 to
4674 @minus{}2. Negative values are used to specify that the exponent is in
4677 For bases up to 36, case is ignored; upper-case and lower-case letters have
4678 the same value; for bases 37 to 62, upper-case letter represent the usual
4679 10..35 while lower-case letter represent 36..61.
4681 Unlike the corresponding @code{mpz} function, the base will not be determined
4682 from the leading characters of the string if @var{base} is 0. This is so that
4683 numbers like @samp{0.23} are not interpreted as octal.
4685 White space is allowed in the string, and is simply ignored. [This is not
4686 really true; white-space is ignored in the beginning of the string and within
4687 the mantissa, but not in other places, such as after a minus sign or in the
4688 exponent. We are considering changing the definition of this function, making
4689 it fail when there is any white-space in the input, since that makes a lot of
4690 sense. Please tell us your opinion about this change. Do you really want it
4691 to accept @nicode{"3 14"} as meaning 314 as it does now?]
4693 This function returns 0 if the entire string is a valid number in base
4694 @var{base}. Otherwise it returns @minus{}1.
4697 @deftypefun void mpf_swap (mpf_t @var{rop1}, mpf_t @var{rop2})
4698 Swap @var{rop1} and @var{rop2} efficiently. Both the values and the
4699 precisions of the two variables are swapped.
4703 @node Simultaneous Float Init & Assign, Converting Floats, Assigning Floats, Floating-point Functions
4704 @comment node-name, next, previous, up
4705 @section Combined Initialization and Assignment Functions
4706 @cindex Float assignment functions
4707 @cindex Assignment functions
4708 @cindex Float initialization functions
4709 @cindex Initialization functions
4711 For convenience, GMP provides a parallel series of initialize-and-set functions
4712 which initialize the output and then store the value there. These functions'
4713 names have the form @code{mpf_init_set@dots{}}
4715 Once the float has been initialized by any of the @code{mpf_init_set@dots{}}
4716 functions, it can be used as the source or destination operand for the ordinary
4717 float functions. Don't use an initialize-and-set function on a variable
4718 already initialized!
4720 @deftypefun void mpf_init_set (mpf_t @var{rop}, mpf_t @var{op})
4721 @deftypefunx void mpf_init_set_ui (mpf_t @var{rop}, unsigned long int @var{op})
4722 @deftypefunx void mpf_init_set_si (mpf_t @var{rop}, signed long int @var{op})
4723 @deftypefunx void mpf_init_set_d (mpf_t @var{rop}, double @var{op})
4724 Initialize @var{rop} and set its value from @var{op}.
4726 The precision of @var{rop} will be taken from the active default precision, as
4727 set by @code{mpf_set_default_prec}.
4730 @deftypefun int mpf_init_set_str (mpf_t @var{rop}, char *@var{str}, int @var{base})
4731 Initialize @var{rop} and set its value from the string in @var{str}. See
4732 @code{mpf_set_str} above for details on the assignment operation.
4734 Note that @var{rop} is initialized even if an error occurs. (I.e., you have to
4735 call @code{mpf_clear} for it.)
4737 The precision of @var{rop} will be taken from the active default precision, as
4738 set by @code{mpf_set_default_prec}.
4742 @node Converting Floats, Float Arithmetic, Simultaneous Float Init & Assign, Floating-point Functions
4743 @comment node-name, next, previous, up
4744 @section Conversion Functions
4745 @cindex Float conversion functions
4746 @cindex Conversion functions
4748 @deftypefun double mpf_get_d (mpf_t @var{op})
4749 Convert @var{op} to a @code{double}, truncating if necessary (ie.@: rounding
4752 If the exponent in @var{op} is too big or too small to fit a @code{double}
4753 then the result is system dependent. For too big an infinity is returned when
4754 available. For too small @math{0.0} is normally returned. Hardware overflow,
4755 underflow and denorm traps may or may not occur.
4758 @deftypefun double mpf_get_d_2exp (signed long int *@var{exp}, mpf_t @var{op})
4759 Convert @var{op} to a @code{double}, truncating if necessary (ie.@: rounding
4760 towards zero), and with an exponent returned separately.
4762 The return value is in the range @math{0.5@le{}@GMPabs{@var{d}}<1} and the
4763 exponent is stored to @code{*@var{exp}}. @m{@var{d} * 2^{exp}, @var{d} *
4764 2^@var{exp}} is the (truncated) @var{op} value. If @var{op} is zero, the
4765 return is @math{0.0} and 0 is stored to @code{*@var{exp}}.
4767 @cindex @code{frexp}
4768 This is similar to the standard C @code{frexp} function (@pxref{Normalization
4769 Functions,,, libc, The GNU C Library Reference Manual}).
4772 @deftypefun long mpf_get_si (mpf_t @var{op})
4773 @deftypefunx {unsigned long} mpf_get_ui (mpf_t @var{op})
4774 Convert @var{op} to a @code{long} or @code{unsigned long}, truncating any
4775 fraction part. If @var{op} is too big for the return type, the result is
4778 See also @code{mpf_fits_slong_p} and @code{mpf_fits_ulong_p}
4779 (@pxref{Miscellaneous Float Functions}).
4782 @deftypefun {char *} mpf_get_str (char *@var{str}, mp_exp_t *@var{expptr}, int @var{base}, size_t @var{n_digits}, mpf_t @var{op})
4783 Convert @var{op} to a string of digits in base @var{base}. The base argument
4784 may vary from 2 to 62 or from @minus{}2 to @minus{}36. Up to @var{n_digits}
4785 digits will be generated. Trailing zeros are not returned. No more digits
4786 than can be accurately represented by @var{op} are ever generated. If
4787 @var{n_digits} is 0 then that accurate maximum number of digits are generated.
4789 For @var{base} in the range 2..36, digits and lower-case letters are used; for
4790 @minus{}2..@minus{}36, digits and upper-case letters are used; for 37..62,
4791 digits, upper-case letters, and lower-case letters (in that significance order)
4794 If @var{str} is @code{NULL}, the result string is allocated using the current
4795 allocation function (@pxref{Custom Allocation}). The block will be
4796 @code{strlen(str)+1} bytes, that being exactly enough for the string and
4799 If @var{str} is not @code{NULL}, it should point to a block of
4800 @math{@var{n_digits} + 2} bytes, that being enough for the mantissa, a
4801 possible minus sign, and a null-terminator. When @var{n_digits} is 0 to get
4802 all significant digits, an application won't be able to know the space
4803 required, and @var{str} should be @code{NULL} in that case.
4805 The generated string is a fraction, with an implicit radix point immediately
4806 to the left of the first digit. The applicable exponent is written through
4807 the @var{expptr} pointer. For example, the number 3.1416 would be returned as
4808 string @nicode{"31416"} and exponent 1.
4810 When @var{op} is zero, an empty string is produced and the exponent returned
4813 A pointer to the result string is returned, being either the allocated block
4814 or the given @var{str}.
4818 @node Float Arithmetic, Float Comparison, Converting Floats, Floating-point Functions
4819 @comment node-name, next, previous, up
4820 @section Arithmetic Functions
4821 @cindex Float arithmetic functions
4822 @cindex Arithmetic functions
4824 @deftypefun void mpf_add (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
4825 @deftypefunx void mpf_add_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
4826 Set @var{rop} to @math{@var{op1} + @var{op2}}.
4829 @deftypefun void mpf_sub (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
4830 @deftypefunx void mpf_ui_sub (mpf_t @var{rop}, unsigned long int @var{op1}, mpf_t @var{op2})
4831 @deftypefunx void mpf_sub_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
4832 Set @var{rop} to @var{op1} @minus{} @var{op2}.
4835 @deftypefun void mpf_mul (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
4836 @deftypefunx void mpf_mul_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
4837 Set @var{rop} to @math{@var{op1} @GMPtimes{} @var{op2}}.
4840 Division is undefined if the divisor is zero, and passing a zero divisor to the
4841 divide functions will make these functions intentionally divide by zero. This
4842 lets the user handle arithmetic exceptions in these functions in the same
4843 manner as other arithmetic exceptions.
4845 @deftypefun void mpf_div (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
4846 @deftypefunx void mpf_ui_div (mpf_t @var{rop}, unsigned long int @var{op1}, mpf_t @var{op2})
4847 @deftypefunx void mpf_div_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
4848 @cindex Division functions
4849 Set @var{rop} to @var{op1}/@var{op2}.
4852 @deftypefun void mpf_sqrt (mpf_t @var{rop}, mpf_t @var{op})
4853 @deftypefunx void mpf_sqrt_ui (mpf_t @var{rop}, unsigned long int @var{op})
4854 @cindex Root extraction functions
4855 Set @var{rop} to @m{\sqrt{@var{op}}, the square root of @var{op}}.
4858 @deftypefun void mpf_pow_ui (mpf_t @var{rop}, mpf_t @var{op1}, unsigned long int @var{op2})
4859 @cindex Exponentiation functions
4860 @cindex Powering functions
4861 Set @var{rop} to @m{@var{op1}^{op2}, @var{op1} raised to the power @var{op2}}.
4864 @deftypefun void mpf_neg (mpf_t @var{rop}, mpf_t @var{op})
4865 Set @var{rop} to @minus{}@var{op}.
4868 @deftypefun void mpf_abs (mpf_t @var{rop}, mpf_t @var{op})
4869 Set @var{rop} to the absolute value of @var{op}.
4872 @deftypefun void mpf_mul_2exp (mpf_t @var{rop}, mpf_t @var{op1}, mp_bitcnt_t @var{op2})
4873 Set @var{rop} to @m{@var{op1} \times 2^{op2}, @var{op1} times 2 raised to
4877 @deftypefun void mpf_div_2exp (mpf_t @var{rop}, mpf_t @var{op1}, mp_bitcnt_t @var{op2})
4878 Set @var{rop} to @m{@var{op1}/2^{op2}, @var{op1} divided by 2 raised to
4882 @node Float Comparison, I/O of Floats, Float Arithmetic, Floating-point Functions
4883 @comment node-name, next, previous, up
4884 @section Comparison Functions
4885 @cindex Float comparison functions
4886 @cindex Comparison functions
4888 @deftypefun int mpf_cmp (mpf_t @var{op1}, mpf_t @var{op2})
4889 @deftypefunx int mpf_cmp_d (mpf_t @var{op1}, double @var{op2})
4890 @deftypefunx int mpf_cmp_ui (mpf_t @var{op1}, unsigned long int @var{op2})
4891 @deftypefunx int mpf_cmp_si (mpf_t @var{op1}, signed long int @var{op2})
4892 Compare @var{op1} and @var{op2}. Return a positive value if @math{@var{op1} >
4893 @var{op2}}, zero if @math{@var{op1} = @var{op2}}, and a negative value if
4894 @math{@var{op1} < @var{op2}}.
4896 @code{mpf_cmp_d} can be called with an infinity, but results are undefined for
4900 @deftypefun int mpf_eq (mpf_t @var{op1}, mpf_t @var{op2}, mp_bitcnt_t op3)
4901 Return non-zero if the first @var{op3} bits of @var{op1} and @var{op2} are
4902 equal, zero otherwise. I.e., test if @var{op1} and @var{op2} are approximately
4905 Caution 1: All version of GMP up to version 4.2.4 compared just whole limbs,
4906 meaning sometimes more than @var{op3} bits, sometimes fewer.
4908 Caution 2: This function will consider XXX11...111 and XX100...000 different,
4909 even if ... is replaced by a semi-infinite number of bits. Such numbers are
4910 really just one ulp off, and should be considered equal.
4913 @deftypefun void mpf_reldiff (mpf_t @var{rop}, mpf_t @var{op1}, mpf_t @var{op2})
4914 Compute the relative difference between @var{op1} and @var{op2} and store the
4915 result in @var{rop}. This is @math{@GMPabs{@var{op1}-@var{op2}}/@var{op1}}.
4918 @deftypefn Macro int mpf_sgn (mpf_t @var{op})
4920 @cindex Float sign tests
4921 Return @math{+1} if @math{@var{op} > 0}, 0 if @math{@var{op} = 0}, and
4922 @math{-1} if @math{@var{op} < 0}.
4924 This function is actually implemented as a macro. It evaluates its arguments
4928 @node I/O of Floats, Miscellaneous Float Functions, Float Comparison, Floating-point Functions
4929 @comment node-name, next, previous, up
4930 @section Input and Output Functions
4931 @cindex Float input and output functions
4932 @cindex Input functions
4933 @cindex Output functions
4934 @cindex I/O functions
4936 Functions that perform input from a stdio stream, and functions that output to
4937 a stdio stream. Passing a @code{NULL} pointer for a @var{stream} argument to
4938 any of these functions will make them read from @code{stdin} and write to
4939 @code{stdout}, respectively.
4941 When using any of these functions, it is a good idea to include @file{stdio.h}
4942 before @file{gmp.h}, since that will allow @file{gmp.h} to define prototypes
4943 for these functions.
4945 @deftypefun size_t mpf_out_str (FILE *@var{stream}, int @var{base}, size_t @var{n_digits}, mpf_t @var{op})
4946 Print @var{op} to @var{stream}, as a string of digits. Return the number of
4947 bytes written, or if an error occurred, return 0.
4949 The mantissa is prefixed with an @samp{0.} and is in the given @var{base},
4950 which may vary from 2 to 62 or from @minus{}2 to @minus{}36. An exponent is
4951 then printed, separated by an @samp{e}, or if the base is greater than 10 then
4952 by an @samp{@@}. The exponent is always in decimal. The decimal point follows
4953 the current locale, on systems providing @code{localeconv}.
4955 For @var{base} in the range 2..36, digits and lower-case letters are used; for
4956 @minus{}2..@minus{}36, digits and upper-case letters are used; for 37..62,
4957 digits, upper-case letters, and lower-case letters (in that significance order)
4960 Up to @var{n_digits} will be printed from the mantissa, except that no more
4961 digits than are accurately representable by @var{op} will be printed.
4962 @var{n_digits} can be 0 to select that accurate maximum.
4965 @deftypefun size_t mpf_inp_str (mpf_t @var{rop}, FILE *@var{stream}, int @var{base})
4966 Read a string in base @var{base} from @var{stream}, and put the read float in
4967 @var{rop}. The string is of the form @samp{M@@N} or, if the base is 10 or
4968 less, alternatively @samp{MeN}. @samp{M} is the mantissa and @samp{N} is the
4969 exponent. The mantissa is always in the specified base. The exponent is
4970 either in the specified base or, if @var{base} is negative, in decimal. The
4971 decimal point expected is taken from the current locale, on systems providing
4974 The argument @var{base} may be in the ranges 2 to 36, or @minus{}36 to
4975 @minus{}2. Negative values are used to specify that the exponent is in
4978 Unlike the corresponding @code{mpz} function, the base will not be determined
4979 from the leading characters of the string if @var{base} is 0. This is so that
4980 numbers like @samp{0.23} are not interpreted as octal.
4982 Return the number of bytes read, or if an error occurred, return 0.
4985 @c @deftypefun void mpf_out_raw (FILE *@var{stream}, mpf_t @var{float})
4986 @c Output @var{float} on stdio stream @var{stream}, in raw binary
4987 @c format. The float is written in a portable format, with 4 bytes of
4988 @c size information, and that many bytes of limbs. Both the size and the
4989 @c limbs are written in decreasing significance order.
4992 @c @deftypefun void mpf_inp_raw (mpf_t @var{float}, FILE *@var{stream})
4993 @c Input from stdio stream @var{stream} in the format written by
4994 @c @code{mpf_out_raw}, and put the result in @var{float}.
4998 @node Miscellaneous Float Functions, , I/O of Floats, Floating-point Functions
4999 @comment node-name, next, previous, up
5000 @section Miscellaneous Functions
5001 @cindex Miscellaneous float functions
5002 @cindex Float miscellaneous functions
5004 @deftypefun void mpf_ceil (mpf_t @var{rop}, mpf_t @var{op})
5005 @deftypefunx void mpf_floor (mpf_t @var{rop}, mpf_t @var{op})
5006 @deftypefunx void mpf_trunc (mpf_t @var{rop}, mpf_t @var{op})
5007 @cindex Rounding functions
5008 @cindex Float rounding functions
5009 Set @var{rop} to @var{op} rounded to an integer. @code{mpf_ceil} rounds to the
5010 next higher integer, @code{mpf_floor} to the next lower, and @code{mpf_trunc}
5011 to the integer towards zero.
5014 @deftypefun int mpf_integer_p (mpf_t @var{op})
5015 Return non-zero if @var{op} is an integer.
5018 @deftypefun int mpf_fits_ulong_p (mpf_t @var{op})
5019 @deftypefunx int mpf_fits_slong_p (mpf_t @var{op})
5020 @deftypefunx int mpf_fits_uint_p (mpf_t @var{op})
5021 @deftypefunx int mpf_fits_sint_p (mpf_t @var{op})
5022 @deftypefunx int mpf_fits_ushort_p (mpf_t @var{op})
5023 @deftypefunx int mpf_fits_sshort_p (mpf_t @var{op})
5024 Return non-zero if @var{op} would fit in the respective C data type, when
5025 truncated to an integer.
5028 @deftypefun void mpf_urandomb (mpf_t @var{rop}, gmp_randstate_t @var{state}, mp_bitcnt_t @var{nbits})
5029 @cindex Random number functions
5030 @cindex Float random number functions
5031 Generate a uniformly distributed random float in @var{rop}, such that @math{0
5032 @le{} @var{rop} < 1}, with @var{nbits} significant bits in the mantissa.
5034 The variable @var{state} must be initialized by calling one of the
5035 @code{gmp_randinit} functions (@ref{Random State Initialization}) before
5036 invoking this function.
5039 @deftypefun void mpf_random2 (mpf_t @var{rop}, mp_size_t @var{max_size}, mp_exp_t @var{exp})
5040 Generate a random float of at most @var{max_size} limbs, with long strings of
5041 zeros and ones in the binary representation. The exponent of the number is in
5042 the interval @minus{}@var{exp} to @var{exp} (in limbs). This function is
5043 useful for testing functions and algorithms, since these kind of random
5044 numbers have proven to be more likely to trigger corner-case bugs. Negative
5045 random numbers are generated when @var{max_size} is negative.
5048 @c @deftypefun size_t mpf_size (mpf_t @var{op})
5049 @c Return the size of @var{op} measured in number of limbs. If @var{op} is
5050 @c zero, the returned value will be zero. (@xref{Nomenclature}, for an
5051 @c explanation of the concept @dfn{limb}.)
5053 @c @strong{This function is obsolete. It will disappear from future GMP
5058 @node Low-level Functions, Random Number Functions, Floating-point Functions, Top
5059 @comment node-name, next, previous, up
5060 @chapter Low-level Functions
5061 @cindex Low-level functions
5063 This chapter describes low-level GMP functions, used to implement the
5064 high-level GMP functions, but also intended for time-critical user code.
5066 These functions start with the prefix @code{mpn_}.
5068 @c 1. Some of these function clobber input operands.
5071 The @code{mpn} functions are designed to be as fast as possible, @strong{not}
5072 to provide a coherent calling interface. The different functions have somewhat
5073 similar interfaces, but there are variations that make them hard to use. These
5074 functions do as little as possible apart from the real multiple precision
5075 computation, so that no time is spent on things that not all callers need.
5077 A source operand is specified by a pointer to the least significant limb and a
5078 limb count. A destination operand is specified by just a pointer. It is the
5079 responsibility of the caller to ensure that the destination has enough space
5080 for storing the result.
5082 With this way of specifying operands, it is possible to perform computations on
5083 subranges of an argument, and store the result into a subrange of a
5086 A common requirement for all functions is that each source area needs at least
5087 one limb. No size argument may be zero. Unless otherwise stated, in-place
5088 operations are allowed where source and destination are the same, but not where
5089 they only partly overlap.
5091 The @code{mpn} functions are the base for the implementation of the
5092 @code{mpz_}, @code{mpf_}, and @code{mpq_} functions.
5094 This example adds the number beginning at @var{s1p} and the number beginning at
5095 @var{s2p} and writes the sum at @var{destp}. All areas have @var{n} limbs.
5098 cy = mpn_add_n (destp, s1p, s2p, n)
5101 It should be noted that the @code{mpn} functions make no attempt to identify
5102 high or low zero limbs on their operands, or other special forms. On random
5103 data such cases will be unlikely and it'd be wasteful for every function to
5104 check every time. An application knowing something about its data can take
5105 steps to trim or perhaps split its calculations.
5107 @c For reference, within gmp mpz_t operands never have high zero limbs, and
5108 @c we rate low zero limbs as unlikely too (or something an application should
5109 @c handle). This is a prime motivation for not stripping zero limbs in say
5112 @c Other applications doing variable-length calculations will quite likely do
5113 @c something similar to mpz. And even if not then it's highly likely zero
5114 @c limb stripping can be done at just a few judicious points, which will be
5115 @c more efficient than having lots of mpn functions checking every time.
5119 In the notation used below, a source operand is identified by the pointer to
5120 the least significant limb, and the limb count in braces. For example,
5121 @{@var{s1p}, @var{s1n}@}.
5123 @deftypefun mp_limb_t mpn_add_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5124 Add @{@var{s1p}, @var{n}@} and @{@var{s2p}, @var{n}@}, and write the @var{n}
5125 least significant limbs of the result to @var{rp}. Return carry, either 0 or
5128 This is the lowest-level function for addition. It is the preferred function
5129 for addition, since it is written in assembly for most CPUs. For addition of
5130 a variable to itself (i.e., @var{s1p} equals @var{s2p}) use @code{mpn_lshift}
5131 with a count of 1 for optimal speed.
5134 @deftypefun mp_limb_t mpn_add_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
5135 Add @{@var{s1p}, @var{n}@} and @var{s2limb}, and write the @var{n} least
5136 significant limbs of the result to @var{rp}. Return carry, either 0 or 1.
5139 @deftypefun mp_limb_t mpn_add (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, const mp_limb_t *@var{s2p}, mp_size_t @var{s2n})
5140 Add @{@var{s1p}, @var{s1n}@} and @{@var{s2p}, @var{s2n}@}, and write the
5141 @var{s1n} least significant limbs of the result to @var{rp}. Return carry,
5144 This function requires that @var{s1n} is greater than or equal to @var{s2n}.
5147 @deftypefun mp_limb_t mpn_sub_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5148 Subtract @{@var{s2p}, @var{n}@} from @{@var{s1p}, @var{n}@}, and write the
5149 @var{n} least significant limbs of the result to @var{rp}. Return borrow,
5152 This is the lowest-level function for subtraction. It is the preferred
5153 function for subtraction, since it is written in assembly for most CPUs.
5156 @deftypefun mp_limb_t mpn_sub_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
5157 Subtract @var{s2limb} from @{@var{s1p}, @var{n}@}, and write the @var{n} least
5158 significant limbs of the result to @var{rp}. Return borrow, either 0 or 1.
5161 @deftypefun mp_limb_t mpn_sub (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, const mp_limb_t *@var{s2p}, mp_size_t @var{s2n})
5162 Subtract @{@var{s2p}, @var{s2n}@} from @{@var{s1p}, @var{s1n}@}, and write the
5163 @var{s1n} least significant limbs of the result to @var{rp}. Return borrow,
5166 This function requires that @var{s1n} is greater than or equal to
5170 @deftypefun void mpn_neg (mp_limb_t *@var{rp}, const mp_limb_t *@var{sp}, mp_size_t @var{n})
5171 Perform the negation of @{@var{sp}, @var{n}@}, and write the result to
5172 @{@var{rp}, @var{n}@}. Return carry-out.
5175 @deftypefun void mpn_mul_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5176 Multiply @{@var{s1p}, @var{n}@} and @{@var{s2p}, @var{n}@}, and write the
5177 2*@var{n}-limb result to @var{rp}.
5179 The destination has to have space for 2*@var{n} limbs, even if the product's
5180 most significant limb is zero. No overlap is permitted between the
5181 destination and either source.
5183 If the two input operands are the same, use @code{mpn_sqr}.
5186 @deftypefun mp_limb_t mpn_mul (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, const mp_limb_t *@var{s2p}, mp_size_t @var{s2n})
5187 Multiply @{@var{s1p}, @var{s1n}@} and @{@var{s2p}, @var{s2n}@}, and write the
5188 (@var{s1n}+@var{s2n})-limb result to @var{rp}. Return the most significant
5191 The destination has to have space for @var{s1n} + @var{s2n} limbs, even if the
5192 product's most significant limb is zero. No overlap is permitted between the
5193 destination and either source.
5195 This function requires that @var{s1n} is greater than or equal to @var{s2n}.
5198 @deftypefun void mpn_sqr (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n})
5199 Compute the square of @{@var{s1p}, @var{n}@} and write the 2*@var{n}-limb
5202 The destination has to have space for 2*@var{n} limbs, even if the result's
5203 most significant limb is zero. No overlap is permitted between the
5204 destination and the source.
5207 @deftypefun mp_limb_t mpn_mul_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
5208 Multiply @{@var{s1p}, @var{n}@} by @var{s2limb}, and write the @var{n} least
5209 significant limbs of the product to @var{rp}. Return the most significant
5210 limb of the product. @{@var{s1p}, @var{n}@} and @{@var{rp}, @var{n}@} are
5211 allowed to overlap provided @math{@var{rp} @le{} @var{s1p}}.
5213 This is a low-level function that is a building block for general
5214 multiplication as well as other operations in GMP@. It is written in assembly
5217 Don't call this function if @var{s2limb} is a power of 2; use @code{mpn_lshift}
5218 with a count equal to the logarithm of @var{s2limb} instead, for optimal speed.
5221 @deftypefun mp_limb_t mpn_addmul_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
5222 Multiply @{@var{s1p}, @var{n}@} and @var{s2limb}, and add the @var{n} least
5223 significant limbs of the product to @{@var{rp}, @var{n}@} and write the result
5224 to @var{rp}. Return the most significant limb of the product, plus carry-out
5227 This is a low-level function that is a building block for general
5228 multiplication as well as other operations in GMP@. It is written in assembly
5232 @deftypefun mp_limb_t mpn_submul_1 (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n}, mp_limb_t @var{s2limb})
5233 Multiply @{@var{s1p}, @var{n}@} and @var{s2limb}, and subtract the @var{n}
5234 least significant limbs of the product from @{@var{rp}, @var{n}@} and write the
5235 result to @var{rp}. Return the most significant limb of the product, plus
5236 borrow-out from the subtraction.
5238 This is a low-level function that is a building block for general
5239 multiplication and division as well as other operations in GMP@. It is written
5240 in assembly for most CPUs.
5243 @deftypefun void mpn_tdiv_qr (mp_limb_t *@var{qp}, mp_limb_t *@var{rp}, mp_size_t @var{qxn}, const mp_limb_t *@var{np}, mp_size_t @var{nn}, const mp_limb_t *@var{dp}, mp_size_t @var{dn})
5244 Divide @{@var{np}, @var{nn}@} by @{@var{dp}, @var{dn}@} and put the quotient
5245 at @{@var{qp}, @var{nn}@minus{}@var{dn}+1@} and the remainder at @{@var{rp},
5246 @var{dn}@}. The quotient is rounded towards 0.
5248 No overlap is permitted between arguments, except that @var{np} might equal
5249 @var{rp}. The dividend size @var{nn} must be greater than or equal to divisor
5250 size @var{dn}. The most significant limb of the divisor must be non-zero. The
5251 @var{qxn} operand must be zero.
5254 @deftypefun mp_limb_t mpn_divrem (mp_limb_t *@var{r1p}, mp_size_t @var{qxn}, mp_limb_t *@var{rs2p}, mp_size_t @var{rs2n}, const mp_limb_t *@var{s3p}, mp_size_t @var{s3n})
5255 [This function is obsolete. Please call @code{mpn_tdiv_qr} instead for best
5258 Divide @{@var{rs2p}, @var{rs2n}@} by @{@var{s3p}, @var{s3n}@}, and write the
5259 quotient at @var{r1p}, with the exception of the most significant limb, which
5260 is returned. The remainder replaces the dividend at @var{rs2p}; it will be
5261 @var{s3n} limbs long (i.e., as many limbs as the divisor).
5263 In addition to an integer quotient, @var{qxn} fraction limbs are developed, and
5264 stored after the integral limbs. For most usages, @var{qxn} will be zero.
5266 It is required that @var{rs2n} is greater than or equal to @var{s3n}. It is
5267 required that the most significant bit of the divisor is set.
5269 If the quotient is not needed, pass @var{rs2p} + @var{s3n} as @var{r1p}. Aside
5270 from that special case, no overlap between arguments is permitted.
5272 Return the most significant limb of the quotient, either 0 or 1.
5274 The area at @var{r1p} needs to be @var{rs2n} @minus{} @var{s3n} + @var{qxn}
5278 @deftypefn Function mp_limb_t mpn_divrem_1 (mp_limb_t *@var{r1p}, mp_size_t @var{qxn}, @w{mp_limb_t *@var{s2p}}, mp_size_t @var{s2n}, mp_limb_t @var{s3limb})
5279 @deftypefnx Macro mp_limb_t mpn_divmod_1 (mp_limb_t *@var{r1p}, mp_limb_t *@var{s2p}, @w{mp_size_t @var{s2n}}, @w{mp_limb_t @var{s3limb}})
5280 Divide @{@var{s2p}, @var{s2n}@} by @var{s3limb}, and write the quotient at
5281 @var{r1p}. Return the remainder.
5283 The integer quotient is written to @{@var{r1p}+@var{qxn}, @var{s2n}@} and in
5284 addition @var{qxn} fraction limbs are developed and written to @{@var{r1p},
5285 @var{qxn}@}. Either or both @var{s2n} and @var{qxn} can be zero. For most
5286 usages, @var{qxn} will be zero.
5288 @code{mpn_divmod_1} exists for upward source compatibility and is simply a
5289 macro calling @code{mpn_divrem_1} with a @var{qxn} of 0.
5291 The areas at @var{r1p} and @var{s2p} have to be identical or completely
5292 separate, not partially overlapping.
5295 @deftypefun mp_limb_t mpn_divmod (mp_limb_t *@var{r1p}, mp_limb_t *@var{rs2p}, mp_size_t @var{rs2n}, const mp_limb_t *@var{s3p}, mp_size_t @var{s3n})
5296 [This function is obsolete. Please call @code{mpn_tdiv_qr} instead for best
5300 @deftypefn Macro mp_limb_t mpn_divexact_by3 (mp_limb_t *@var{rp}, mp_limb_t *@var{sp}, @w{mp_size_t @var{n}})
5301 @deftypefnx Function mp_limb_t mpn_divexact_by3c (mp_limb_t *@var{rp}, mp_limb_t *@var{sp}, @w{mp_size_t @var{n}}, mp_limb_t @var{carry})
5302 Divide @{@var{sp}, @var{n}@} by 3, expecting it to divide exactly, and writing
5303 the result to @{@var{rp}, @var{n}@}. If 3 divides exactly, the return value is
5304 zero and the result is the quotient. If not, the return value is non-zero and
5305 the result won't be anything useful.
5307 @code{mpn_divexact_by3c} takes an initial carry parameter, which can be the
5308 return value from a previous call, so a large calculation can be done piece by
5309 piece from low to high. @code{mpn_divexact_by3} is simply a macro calling
5310 @code{mpn_divexact_by3c} with a 0 carry parameter.
5312 These routines use a multiply-by-inverse and will be faster than
5313 @code{mpn_divrem_1} on CPUs with fast multiplication but slow division.
5315 The source @math{a}, result @math{q}, size @math{n}, initial carry @math{i},
5316 and return value @math{c} satisfy @m{cb^n+a-i=3q, c*b^n + a-i = 3*q}, where
5317 @m{b=2\GMPraise{@code{GMP\_NUMB\_BITS}}, b=2^GMP_NUMB_BITS}. The
5318 return @math{c} is always 0, 1 or 2, and the initial carry @math{i} must also
5319 be 0, 1 or 2 (these are both borrows really). When @math{c=0} clearly
5320 @math{q=(a-i)/3}. When @m{c \neq 0, c!=0}, the remainder @math{(a-i) @bmod{}
5321 3} is given by @math{3-c}, because @math{b @equiv{} 1 @bmod{} 3} (when
5322 @code{mp_bits_per_limb} is even, which is always so currently).
5325 @deftypefun mp_limb_t mpn_mod_1 (mp_limb_t *@var{s1p}, mp_size_t @var{s1n}, mp_limb_t @var{s2limb})
5326 Divide @{@var{s1p}, @var{s1n}@} by @var{s2limb}, and return the remainder.
5327 @var{s1n} can be zero.
5330 @deftypefun mp_limb_t mpn_lshift (mp_limb_t *@var{rp}, const mp_limb_t *@var{sp}, mp_size_t @var{n}, unsigned int @var{count})
5331 Shift @{@var{sp}, @var{n}@} left by @var{count} bits, and write the result to
5332 @{@var{rp}, @var{n}@}. The bits shifted out at the left are returned in the
5333 least significant @var{count} bits of the return value (the rest of the return
5336 @var{count} must be in the range 1 to @nicode{mp_bits_per_limb}@minus{}1. The
5337 regions @{@var{sp}, @var{n}@} and @{@var{rp}, @var{n}@} may overlap, provided
5338 @math{@var{rp} @ge{} @var{sp}}.
5340 This function is written in assembly for most CPUs.
5343 @deftypefun mp_limb_t mpn_rshift (mp_limb_t *@var{rp}, const mp_limb_t *@var{sp}, mp_size_t @var{n}, unsigned int @var{count})
5344 Shift @{@var{sp}, @var{n}@} right by @var{count} bits, and write the result to
5345 @{@var{rp}, @var{n}@}. The bits shifted out at the right are returned in the
5346 most significant @var{count} bits of the return value (the rest of the return
5349 @var{count} must be in the range 1 to @nicode{mp_bits_per_limb}@minus{}1. The
5350 regions @{@var{sp}, @var{n}@} and @{@var{rp}, @var{n}@} may overlap, provided
5351 @math{@var{rp} @le{} @var{sp}}.
5353 This function is written in assembly for most CPUs.
5356 @deftypefun int mpn_cmp (const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5357 Compare @{@var{s1p}, @var{n}@} and @{@var{s2p}, @var{n}@} and return a
5358 positive value if @math{@var{s1} > @var{s2}}, 0 if they are equal, or a
5359 negative value if @math{@var{s1} < @var{s2}}.
5362 @deftypefun mp_size_t mpn_gcd (mp_limb_t *@var{rp}, mp_limb_t *@var{xp}, mp_size_t @var{xn}, mp_limb_t *@var{yp}, mp_size_t @var{yn})
5363 Set @{@var{rp}, @var{retval}@} to the greatest common divisor of @{@var{xp},
5364 @var{xn}@} and @{@var{yp}, @var{yn}@}. The result can be up to @var{yn} limbs,
5365 the return value is the actual number produced. Both source operands are
5368 @{@var{xp}, @var{xn}@} must have at least as many bits as @{@var{yp},
5369 @var{yn}@}. @{@var{yp}, @var{yn}@} must be odd. Both operands must have
5370 non-zero most significant limbs. No overlap is permitted between @{@var{xp},
5371 @var{xn}@} and @{@var{yp}, @var{yn}@}.
5374 @deftypefun mp_limb_t mpn_gcd_1 (const mp_limb_t *@var{xp}, mp_size_t @var{xn}, mp_limb_t @var{ylimb})
5375 Return the greatest common divisor of @{@var{xp}, @var{xn}@} and @var{ylimb}.
5376 Both operands must be non-zero.
5379 @deftypefun mp_size_t mpn_gcdext (mp_limb_t *@var{gp}, mp_limb_t *@var{sp}, mp_size_t *@var{sn}, mp_limb_t *@var{xp}, mp_size_t @var{xn}, mp_limb_t *@var{yp}, mp_size_t @var{yn})
5380 Let @m{U,@var{U}} be defined by @{@var{xp}, @var{xn}@} and let @m{V,@var{V}} be
5381 defined by @{@var{yp}, @var{yn}@}.
5383 Compute the greatest common divisor @math{G} of @math{U} and @math{V}. Compute
5384 a cofactor @math{S} such that @math{G = US + VT}. The second cofactor @var{T}
5385 is not computed but can easily be obtained from @m{(G - US) / V, (@var{G} -
5386 @var{U}*@var{S}) / @var{V}} (the division will be exact). It is required that
5389 @math{S} satisfies @math{S = 1} or @math{@GMPabs{S} < V / (2 G)}. @math{S =
5390 0} if and only if @math{V} divides @math{U} (i.e., @math{G = V}).
5392 Store @math{G} at @var{gp} and let the return value define its limb count.
5393 Store @math{S} at @var{sp} and let |*@var{sn}| define its limb count. @math{S}
5394 can be negative; when this happens *@var{sn} will be negative. The areas at
5395 @var{gp} and @var{sp} should each have room for @math{@var{xn}+1} limbs.
5397 The areas @{@var{xp}, @math{@var{xn}+1}@} and @{@var{yp}, @math{@var{yn}+1}@}
5398 are destroyed (i.e.@: the input operands plus an extra limb past the end of
5401 Compatibility note: GMP 4.3.0 and 4.3.1 defined @math{S} less strictly.
5402 Earlier as well as later GMP releases define @math{S} as described here.
5405 @deftypefun mp_size_t mpn_sqrtrem (mp_limb_t *@var{r1p}, mp_limb_t *@var{r2p}, const mp_limb_t *@var{sp}, mp_size_t @var{n})
5406 Compute the square root of @{@var{sp}, @var{n}@} and put the result at
5407 @{@var{r1p}, @math{@GMPceil{@var{n}/2}}@} and the remainder at @{@var{r2p},
5408 @var{retval}@}. @var{r2p} needs space for @var{n} limbs, but the return value
5409 indicates how many are produced.
5411 The most significant limb of @{@var{sp}, @var{n}@} must be non-zero. The
5412 areas @{@var{r1p}, @math{@GMPceil{@var{n}/2}}@} and @{@var{sp}, @var{n}@} must
5413 be completely separate. The areas @{@var{r2p}, @var{n}@} and @{@var{sp},
5414 @var{n}@} must be either identical or completely separate.
5416 If the remainder is not wanted then @var{r2p} can be @code{NULL}, and in this
5417 case the return value is zero or non-zero according to whether the remainder
5418 would have been zero or non-zero.
5420 A return value of zero indicates a perfect square. See also
5421 @code{mpz_perfect_square_p}.
5424 @deftypefun mp_size_t mpn_get_str (unsigned char *@var{str}, int @var{base}, mp_limb_t *@var{s1p}, mp_size_t @var{s1n})
5425 Convert @{@var{s1p}, @var{s1n}@} to a raw unsigned char array at @var{str} in
5426 base @var{base}, and return the number of characters produced. There may be
5427 leading zeros in the string. The string is not in ASCII; to convert it to
5428 printable format, add the ASCII codes for @samp{0} or @samp{A}, depending on
5429 the base and range. @var{base} can vary from 2 to 256.
5431 The most significant limb of the input @{@var{s1p}, @var{s1n}@} must be
5432 non-zero. The input @{@var{s1p}, @var{s1n}@} is clobbered, except when
5433 @var{base} is a power of 2, in which case it's unchanged.
5435 The area at @var{str} has to have space for the largest possible number
5436 represented by a @var{s1n} long limb array, plus one extra character.
5439 @deftypefun mp_size_t mpn_set_str (mp_limb_t *@var{rp}, const unsigned char *@var{str}, size_t @var{strsize}, int @var{base})
5440 Convert bytes @{@var{str},@var{strsize}@} in the given @var{base} to limbs at
5443 @math{@var{str}[0]} is the most significant byte and
5444 @math{@var{str}[@var{strsize}-1]} is the least significant. Each byte should
5445 be a value in the range 0 to @math{@var{base}-1}, not an ASCII character.
5446 @var{base} can vary from 2 to 256.
5448 The return value is the number of limbs written to @var{rp}. If the most
5449 significant input byte is non-zero then the high limb at @var{rp} will be
5450 non-zero, and only that exact number of limbs will be required there.
5452 If the most significant input byte is zero then there may be high zero limbs
5453 written to @var{rp} and included in the return value.
5455 @var{strsize} must be at least 1, and no overlap is permitted between
5456 @{@var{str},@var{strsize}@} and the result at @var{rp}.
5459 @deftypefun {mp_bitcnt_t} mpn_scan0 (const mp_limb_t *@var{s1p}, mp_bitcnt_t @var{bit})
5460 Scan @var{s1p} from bit position @var{bit} for the next clear bit.
5462 It is required that there be a clear bit within the area at @var{s1p} at or
5463 beyond bit position @var{bit}, so that the function has something to return.
5466 @deftypefun {mp_bitcnt_t} mpn_scan1 (const mp_limb_t *@var{s1p}, mp_bitcnt_t @var{bit})
5467 Scan @var{s1p} from bit position @var{bit} for the next set bit.
5469 It is required that there be a set bit within the area at @var{s1p} at or
5470 beyond bit position @var{bit}, so that the function has something to return.
5473 @deftypefun void mpn_random (mp_limb_t *@var{r1p}, mp_size_t @var{r1n})
5474 @deftypefunx void mpn_random2 (mp_limb_t *@var{r1p}, mp_size_t @var{r1n})
5475 Generate a random number of length @var{r1n} and store it at @var{r1p}. The
5476 most significant limb is always non-zero. @code{mpn_random} generates
5477 uniformly distributed limb data, @code{mpn_random2} generates long strings of
5478 zeros and ones in the binary representation.
5480 @code{mpn_random2} is intended for testing the correctness of the @code{mpn}
5484 @deftypefun {mp_bitcnt_t} mpn_popcount (const mp_limb_t *@var{s1p}, mp_size_t @var{n})
5485 Count the number of set bits in @{@var{s1p}, @var{n}@}.
5488 @deftypefun {mp_bitcnt_t} mpn_hamdist (const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5489 Compute the hamming distance between @{@var{s1p}, @var{n}@} and @{@var{s2p},
5490 @var{n}@}, which is the number of bit positions where the two operands have
5491 different bit values.
5494 @deftypefun int mpn_perfect_square_p (const mp_limb_t *@var{s1p}, mp_size_t @var{n})
5495 Return non-zero iff @{@var{s1p}, @var{n}@} is a perfect square.
5498 @deftypefun void mpn_and_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5499 Perform the bitwise logical and of @{@var{s1p}, @var{n}@} and @{@var{s2p},
5500 @var{n}@}, and write the result to @{@var{rp}, @var{n}@}.
5503 @deftypefun void mpn_ior_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5504 Perform the bitwise logical inclusive or of @{@var{s1p}, @var{n}@} and
5505 @{@var{s2p}, @var{n}@}, and write the result to @{@var{rp}, @var{n}@}.
5508 @deftypefun void mpn_xor_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5509 Perform the bitwise logical exclusive or of @{@var{s1p}, @var{n}@} and
5510 @{@var{s2p}, @var{n}@}, and write the result to @{@var{rp}, @var{n}@}.
5513 @deftypefun void mpn_andn_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5514 Perform the bitwise logical and of @{@var{s1p}, @var{n}@} and the bitwise
5515 complement of @{@var{s2p}, @var{n}@}, and write the result to @{@var{rp}, @var{n}@}.
5518 @deftypefun void mpn_iorn_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5519 Perform the bitwise logical inclusive or of @{@var{s1p}, @var{n}@} and the bitwise
5520 complement of @{@var{s2p}, @var{n}@}, and write the result to @{@var{rp}, @var{n}@}.
5523 @deftypefun void mpn_nand_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5524 Perform the bitwise logical and of @{@var{s1p}, @var{n}@} and @{@var{s2p},
5525 @var{n}@}, and write the bitwise complement of the result to @{@var{rp}, @var{n}@}.
5528 @deftypefun void mpn_nior_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5529 Perform the bitwise logical inclusive or of @{@var{s1p}, @var{n}@} and
5530 @{@var{s2p}, @var{n}@}, and write the bitwise complement of the result to
5531 @{@var{rp}, @var{n}@}.
5534 @deftypefun void mpn_xnor_n (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, const mp_limb_t *@var{s2p}, mp_size_t @var{n})
5535 Perform the bitwise logical exclusive or of @{@var{s1p}, @var{n}@} and
5536 @{@var{s2p}, @var{n}@}, and write the bitwise complement of the result to
5537 @{@var{rp}, @var{n}@}.
5540 @deftypefun void mpn_com (mp_limb_t *@var{rp}, const mp_limb_t *@var{sp}, mp_size_t @var{n})
5541 Perform the bitwise complement of @{@var{sp}, @var{n}@}, and write the result
5542 to @{@var{rp}, @var{n}@}.
5545 @deftypefun void mpn_copyi (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n})
5546 Copy from @{@var{s1p}, @var{n}@} to @{@var{rp}, @var{n}@}, increasingly.
5549 @deftypefun void mpn_copyd (mp_limb_t *@var{rp}, const mp_limb_t *@var{s1p}, mp_size_t @var{n})
5550 Copy from @{@var{s1p}, @var{n}@} to @{@var{rp}, @var{n}@}, decreasingly.
5553 @deftypefun void mpn_zero (mp_limb_t *@var{rp}, mp_size_t @var{n})
5554 Zero @{@var{rp}, @var{n}@}.
5561 @strong{Everything in this section is highly experimental and may disappear or
5562 be subject to incompatible changes in a future version of GMP.}
5564 Nails are an experimental feature whereby a few bits are left unused at the
5565 top of each @code{mp_limb_t}. This can significantly improve carry handling
5568 All the @code{mpn} functions accepting limb data will expect the nail bits to
5569 be zero on entry, and will return data with the nails similarly all zero.
5570 This applies both to limb vectors and to single limb arguments.
5572 Nails can be enabled by configuring with @samp{--enable-nails}. By default
5573 the number of bits will be chosen according to what suits the host processor,
5574 but a particular number can be selected with @samp{--enable-nails=N}.
5576 At the mpn level, a nail build is neither source nor binary compatible with a
5577 non-nail build, strictly speaking. But programs acting on limbs only through
5578 the mpn functions are likely to work equally well with either build, and
5579 judicious use of the definitions below should make any program compatible with
5580 either build, at the source level.
5582 For the higher level routines, meaning @code{mpz} etc, a nail build should be
5583 fully source and binary compatible with a non-nail build.
5585 @defmac GMP_NAIL_BITS
5586 @defmacx GMP_NUMB_BITS
5587 @defmacx GMP_LIMB_BITS
5588 @code{GMP_NAIL_BITS} is the number of nail bits, or 0 when nails are not in
5589 use. @code{GMP_NUMB_BITS} is the number of data bits in a limb.
5590 @code{GMP_LIMB_BITS} is the total number of bits in an @code{mp_limb_t}. In
5594 GMP_LIMB_BITS == GMP_NAIL_BITS + GMP_NUMB_BITS
5598 @defmac GMP_NAIL_MASK
5599 @defmacx GMP_NUMB_MASK
5600 Bit masks for the nail and number parts of a limb. @code{GMP_NAIL_MASK} is 0
5601 when nails are not in use.
5603 @code{GMP_NAIL_MASK} is not often needed, since the nail part can be obtained
5604 with @code{x >> GMP_NUMB_BITS}, and that means one less large constant, which
5605 can help various RISC chips.
5608 @defmac GMP_NUMB_MAX
5609 The maximum value that can be stored in the number part of a limb. This is
5610 the same as @code{GMP_NUMB_MASK}, but can be used for clarity when doing
5611 comparisons rather than bit-wise operations.
5614 The term ``nails'' comes from finger or toe nails, which are at the ends of a
5615 limb (arm or leg). ``numb'' is short for number, but is also how the
5616 developers felt after trying for a long time to come up with sensible names
5619 In the future (the distant future most likely) a non-zero nail might be
5620 permitted, giving non-unique representations for numbers in a limb vector.
5621 This would help vector processors since carries would only ever need to
5622 propagate one or two limbs.
5625 @node Random Number Functions, Formatted Output, Low-level Functions, Top
5626 @chapter Random Number Functions
5627 @cindex Random number functions
5629 Sequences of pseudo-random numbers in GMP are generated using a variable of
5630 type @code{gmp_randstate_t}, which holds an algorithm selection and a current
5631 state. Such a variable must be initialized by a call to one of the
5632 @code{gmp_randinit} functions, and can be seeded with one of the
5633 @code{gmp_randseed} functions.
5635 The functions actually generating random numbers are described in @ref{Integer
5636 Random Numbers}, and @ref{Miscellaneous Float Functions}.
5638 The older style random number functions don't accept a @code{gmp_randstate_t}
5639 parameter but instead share a global variable of that type. They use a
5640 default algorithm and are currently not seeded (though perhaps that will
5641 change in the future). The new functions accepting a @code{gmp_randstate_t}
5642 are recommended for applications that care about randomness.
5645 * Random State Initialization::
5646 * Random State Seeding::
5647 * Random State Miscellaneous::
5650 @node Random State Initialization, Random State Seeding, Random Number Functions, Random Number Functions
5651 @section Random State Initialization
5652 @cindex Random number state
5653 @cindex Initialization functions
5655 @deftypefun void gmp_randinit_default (gmp_randstate_t @var{state})
5656 Initialize @var{state} with a default algorithm. This will be a compromise
5657 between speed and randomness, and is recommended for applications with no
5658 special requirements. Currently this is @code{gmp_randinit_mt}.
5661 @deftypefun void gmp_randinit_mt (gmp_randstate_t @var{state})
5662 @cindex Mersenne twister random numbers
5663 Initialize @var{state} for a Mersenne Twister algorithm. This algorithm is
5664 fast and has good randomness properties.
5667 @deftypefun void gmp_randinit_lc_2exp (gmp_randstate_t @var{state}, mpz_t @var{a}, @w{unsigned long @var{c}}, @w{mp_bitcnt_t @var{m2exp}})
5668 @cindex Linear congruential random numbers
5669 Initialize @var{state} with a linear congruential algorithm @m{X = (@var{a}X +
5670 @var{c}) @bmod 2^{m2exp}, X = (@var{a}*X + @var{c}) mod 2^@var{m2exp}}.
5672 The low bits of @math{X} in this algorithm are not very random. The least
5673 significant bit will have a period no more than 2, and the second bit no more
5674 than 4, etc. For this reason only the high half of each @math{X} is actually
5677 When a random number of more than @math{@var{m2exp}/2} bits is to be
5678 generated, multiple iterations of the recurrence are used and the results
5682 @deftypefun int gmp_randinit_lc_2exp_size (gmp_randstate_t @var{state}, mp_bitcnt_t @var{size})
5683 @cindex Linear congruential random numbers
5684 Initialize @var{state} for a linear congruential algorithm as per
5685 @code{gmp_randinit_lc_2exp}. @var{a}, @var{c} and @var{m2exp} are selected
5686 from a table, chosen so that @var{size} bits (or more) of each @math{X} will
5687 be used, ie.@: @math{@var{m2exp}/2 @ge{} @var{size}}.
5689 If successful the return value is non-zero. If @var{size} is bigger than the
5690 table data provides then the return value is zero. The maximum @var{size}
5691 currently supported is 128.
5694 @deftypefun void gmp_randinit_set (gmp_randstate_t @var{rop}, gmp_randstate_t @var{op})
5695 Initialize @var{rop} with a copy of the algorithm and state from @var{op}.
5698 @c Although gmp_randinit, gmp_errno and related constants are obsolete, we
5699 @c still put @findex entries for them, since they're still documented and
5700 @c someone might be looking them up when perusing old application code.
5702 @deftypefun void gmp_randinit (gmp_randstate_t @var{state}, @w{gmp_randalg_t @var{alg}}, @dots{})
5703 @strong{This function is obsolete.}
5705 @findex GMP_RAND_ALG_LC
5706 @findex GMP_RAND_ALG_DEFAULT
5707 Initialize @var{state} with an algorithm selected by @var{alg}. The only
5708 choice is @code{GMP_RAND_ALG_LC}, which is @code{gmp_randinit_lc_2exp_size}
5709 described above. A third parameter of type @code{unsigned long} is required,
5710 this is the @var{size} for that function. @code{GMP_RAND_ALG_DEFAULT} or 0
5711 are the same as @code{GMP_RAND_ALG_LC}.
5713 @c For reference, this is the only place gmp_errno has been documented, and
5714 @c due to being non thread safe we won't be adding to it's uses.
5716 @findex GMP_ERROR_UNSUPPORTED_ARGUMENT
5717 @findex GMP_ERROR_INVALID_ARGUMENT
5718 @code{gmp_randinit} sets bits in the global variable @code{gmp_errno} to
5719 indicate an error. @code{GMP_ERROR_UNSUPPORTED_ARGUMENT} if @var{alg} is
5720 unsupported, or @code{GMP_ERROR_INVALID_ARGUMENT} if the @var{size} parameter
5721 is too big. It may be noted this error reporting is not thread safe (a good
5722 reason to use @code{gmp_randinit_lc_2exp_size} instead).
5725 @deftypefun void gmp_randclear (gmp_randstate_t @var{state})
5726 Free all memory occupied by @var{state}.
5730 @node Random State Seeding, Random State Miscellaneous, Random State Initialization, Random Number Functions
5731 @section Random State Seeding
5732 @cindex Random number seeding
5733 @cindex Seeding random numbers
5735 @deftypefun void gmp_randseed (gmp_randstate_t @var{state}, mpz_t @var{seed})
5736 @deftypefunx void gmp_randseed_ui (gmp_randstate_t @var{state}, @w{unsigned long int @var{seed}})
5737 Set an initial seed value into @var{state}.
5739 The size of a seed determines how many different sequences of random numbers
5740 that it's possible to generate. The ``quality'' of the seed is the randomness
5741 of a given seed compared to the previous seed used, and this affects the
5742 randomness of separate number sequences. The method for choosing a seed is
5743 critical if the generated numbers are to be used for important applications,
5744 such as generating cryptographic keys.
5746 Traditionally the system time has been used to seed, but care needs to be
5747 taken with this. If an application seeds often and the resolution of the
5748 system clock is low, then the same sequence of numbers might be repeated.
5749 Also, the system time is quite easy to guess, so if unpredictability is
5750 required then it should definitely not be the only source for the seed value.
5751 On some systems there's a special device @file{/dev/random} which provides
5752 random data better suited for use as a seed.
5756 @node Random State Miscellaneous, , Random State Seeding, Random Number Functions
5757 @section Random State Miscellaneous
5759 @deftypefun {unsigned long} gmp_urandomb_ui (gmp_randstate_t @var{state}, unsigned long @var{n})
5760 Return a uniformly distributed random number of @var{n} bits, ie.@: in the
5761 range 0 to @m{2^n-1,2^@var{n}-1} inclusive. @var{n} must be less than or
5762 equal to the number of bits in an @code{unsigned long}.
5765 @deftypefun {unsigned long} gmp_urandomm_ui (gmp_randstate_t @var{state}, unsigned long @var{n})
5766 Return a uniformly distributed random number in the range 0 to
5767 @math{@var{n}-1}, inclusive.
5771 @node Formatted Output, Formatted Input, Random Number Functions, Top
5772 @chapter Formatted Output
5773 @cindex Formatted output
5774 @cindex @code{printf} formatted output
5777 * Formatted Output Strings::
5778 * Formatted Output Functions::
5779 * C++ Formatted Output::
5782 @node Formatted Output Strings, Formatted Output Functions, Formatted Output, Formatted Output
5783 @section Format Strings
5785 @code{gmp_printf} and friends accept format strings similar to the standard C
5786 @code{printf} (@pxref{Formatted Output,, Formatted Output, libc, The GNU C
5787 Library Reference Manual}). A format specification is of the form
5790 % [flags] [width] [.[precision]] [type] conv
5793 GMP adds types @samp{Z}, @samp{Q} and @samp{F} for @code{mpz_t}, @code{mpq_t}
5794 and @code{mpf_t} respectively, @samp{M} for @code{mp_limb_t}, and @samp{N} for
5795 an @code{mp_limb_t} array. @samp{Z}, @samp{Q}, @samp{M} and @samp{N} behave
5796 like integers. @samp{Q} will print a @samp{/} and a denominator, if needed.
5797 @samp{F} behaves like a float. For example,
5801 gmp_printf ("%s is an mpz %Zd\n", "here", z);
5804 gmp_printf ("a hex rational: %#40Qx\n", q);
5808 gmp_printf ("fixed point mpf %.*Ff with %d digits\n", n, f, n);
5811 gmp_printf ("limb %Mu\n", l);
5813 const mp_limb_t *ptr;
5815 gmp_printf ("limb array %Nx\n", ptr, size);
5818 For @samp{N} the limbs are expected least significant first, as per the
5819 @code{mpn} functions (@pxref{Low-level Functions}). A negative size can be
5820 given to print the value as a negative.
5822 All the standard C @code{printf} types behave the same as the C library
5823 @code{printf}, and can be freely intermixed with the GMP extensions. In the
5824 current implementation the standard parts of the format string are simply
5825 handed to @code{printf} and only the GMP extensions handled directly.
5827 The flags accepted are as follows. GLIBC style @nisamp{'} is only for the
5828 standard C types (not the GMP types), and only if the C library supports it.
5831 @multitable {(space)} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
5832 @item @nicode{0} @tab pad with zeros (rather than spaces)
5833 @item @nicode{#} @tab show the base with @samp{0x}, @samp{0X} or @samp{0}
5834 @item @nicode{+} @tab always show a sign
5835 @item (space) @tab show a space or a @samp{-} sign
5836 @item @nicode{'} @tab group digits, GLIBC style (not GMP types)
5840 The optional width and precision can be given as a number within the format
5841 string, or as a @samp{*} to take an extra parameter of type @code{int}, the
5842 same as the standard @code{printf}.
5844 The standard types accepted are as follows. @samp{h} and @samp{l} are
5845 portable, the rest will depend on the compiler (or include files) for the type
5846 and the C library for the output.
5849 @multitable {(space)} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
5850 @item @nicode{h} @tab @nicode{short}
5851 @item @nicode{hh} @tab @nicode{char}
5852 @item @nicode{j} @tab @nicode{intmax_t} or @nicode{uintmax_t}
5853 @item @nicode{l} @tab @nicode{long} or @nicode{wchar_t}
5854 @item @nicode{ll} @tab @nicode{long long}
5855 @item @nicode{L} @tab @nicode{long double}
5856 @item @nicode{q} @tab @nicode{quad_t} or @nicode{u_quad_t}
5857 @item @nicode{t} @tab @nicode{ptrdiff_t}
5858 @item @nicode{z} @tab @nicode{size_t}
5866 @multitable {(space)} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
5867 @item @nicode{F} @tab @nicode{mpf_t}, float conversions
5868 @item @nicode{Q} @tab @nicode{mpq_t}, integer conversions
5869 @item @nicode{M} @tab @nicode{mp_limb_t}, integer conversions
5870 @item @nicode{N} @tab @nicode{mp_limb_t} array, integer conversions
5871 @item @nicode{Z} @tab @nicode{mpz_t}, integer conversions
5875 The conversions accepted are as follows. @samp{a} and @samp{A} are always
5876 supported for @code{mpf_t} but depend on the C library for standard C float
5877 types. @samp{m} and @samp{p} depend on the C library.
5880 @multitable {(space)} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
5881 @item @nicode{a} @nicode{A} @tab hex floats, C99 style
5882 @item @nicode{c} @tab character
5883 @item @nicode{d} @tab decimal integer
5884 @item @nicode{e} @nicode{E} @tab scientific format float
5885 @item @nicode{f} @tab fixed point float
5886 @item @nicode{i} @tab same as @nicode{d}
5887 @item @nicode{g} @nicode{G} @tab fixed or scientific float
5888 @item @nicode{m} @tab @code{strerror} string, GLIBC style
5889 @item @nicode{n} @tab store characters written so far
5890 @item @nicode{o} @tab octal integer
5891 @item @nicode{p} @tab pointer
5892 @item @nicode{s} @tab string
5893 @item @nicode{u} @tab unsigned integer
5894 @item @nicode{x} @nicode{X} @tab hex integer
5898 @samp{o}, @samp{x} and @samp{X} are unsigned for the standard C types, but for
5899 types @samp{Z}, @samp{Q} and @samp{N} they are signed. @samp{u} is not
5900 meaningful for @samp{Z}, @samp{Q} and @samp{N}.
5902 @samp{M} is a proxy for the C library @samp{l} or @samp{L}, according to the
5903 size of @code{mp_limb_t}. Unsigned conversions will be usual, but a signed
5904 conversion can be used and will interpret the value as a twos complement
5907 @samp{n} can be used with any type, even the GMP types.
5909 Other types or conversions that might be accepted by the C library
5910 @code{printf} cannot be used through @code{gmp_printf}, this includes for
5911 instance extensions registered with GLIBC @code{register_printf_function}.
5912 Also currently there's no support for POSIX @samp{$} style numbered arguments
5913 (perhaps this will be added in the future).
5915 The precision field has it's usual meaning for integer @samp{Z} and float
5916 @samp{F} types, but is currently undefined for @samp{Q} and should not be used
5919 @code{mpf_t} conversions only ever generate as many digits as can be
5920 accurately represented by the operand, the same as @code{mpf_get_str} does.
5921 Zeros will be used if necessary to pad to the requested precision. This
5922 happens even for an @samp{f} conversion of an @code{mpf_t} which is an
5923 integer, for instance @math{2^@W{1024}} in an @code{mpf_t} of 128 bits
5924 precision will only produce about 40 digits, then pad with zeros to the
5925 decimal point. An empty precision field like @samp{%.Fe} or @samp{%.Ff} can
5926 be used to specifically request just the significant digits.
5928 The decimal point character (or string) is taken from the current locale
5929 settings on systems which provide @code{localeconv} (@pxref{Locales,, Locales
5930 and Internationalization, libc, The GNU C Library Reference Manual}). The C
5931 library will normally do the same for standard float output.
5933 The format string is only interpreted as plain @code{char}s, multibyte
5934 characters are not recognised. Perhaps this will change in the future.
5937 @node Formatted Output Functions, C++ Formatted Output, Formatted Output Strings, Formatted Output
5939 @cindex Output functions
5941 Each of the following functions is similar to the corresponding C library
5942 function. The basic @code{printf} forms take a variable argument list. The
5943 @code{vprintf} forms take an argument pointer, see @ref{Variadic Functions,,
5944 Variadic Functions, libc, The GNU C Library Reference Manual}, or @samp{man 3
5947 It should be emphasised that if a format string is invalid, or the arguments
5948 don't match what the format specifies, then the behaviour of any of these
5949 functions will be unpredictable. GCC format string checking is not available,
5950 since it doesn't recognise the GMP extensions.
5952 The file based functions @code{gmp_printf} and @code{gmp_fprintf} will return
5953 @math{-1} to indicate a write error. Output is not ``atomic'', so partial
5954 output may be produced if a write error occurs. All the functions can return
5955 @math{-1} if the C library @code{printf} variant in use returns @math{-1}, but
5956 this shouldn't normally occur.
5958 @deftypefun int gmp_printf (const char *@var{fmt}, @dots{})
5959 @deftypefunx int gmp_vprintf (const char *@var{fmt}, va_list @var{ap})
5960 Print to the standard output @code{stdout}. Return the number of characters
5961 written, or @math{-1} if an error occurred.
5964 @deftypefun int gmp_fprintf (FILE *@var{fp}, const char *@var{fmt}, @dots{})
5965 @deftypefunx int gmp_vfprintf (FILE *@var{fp}, const char *@var{fmt}, va_list @var{ap})
5966 Print to the stream @var{fp}. Return the number of characters written, or
5967 @math{-1} if an error occurred.
5970 @deftypefun int gmp_sprintf (char *@var{buf}, const char *@var{fmt}, @dots{})
5971 @deftypefunx int gmp_vsprintf (char *@var{buf}, const char *@var{fmt}, va_list @var{ap})
5972 Form a null-terminated string in @var{buf}. Return the number of characters
5973 written, excluding the terminating null.
5975 No overlap is permitted between the space at @var{buf} and the string
5978 These functions are not recommended, since there's no protection against
5979 exceeding the space available at @var{buf}.
5982 @deftypefun int gmp_snprintf (char *@var{buf}, size_t @var{size}, const char *@var{fmt}, @dots{})
5983 @deftypefunx int gmp_vsnprintf (char *@var{buf}, size_t @var{size}, const char *@var{fmt}, va_list @var{ap})
5984 Form a null-terminated string in @var{buf}. No more than @var{size} bytes
5985 will be written. To get the full output, @var{size} must be enough for the
5986 string and null-terminator.
5988 The return value is the total number of characters which ought to have been
5989 produced, excluding the terminating null. If @math{@var{retval} @ge{}
5990 @var{size}} then the actual output has been truncated to the first
5991 @math{@var{size}-1} characters, and a null appended.
5993 No overlap is permitted between the region @{@var{buf},@var{size}@} and the
5996 Notice the return value is in ISO C99 @code{snprintf} style. This is so even
5997 if the C library @code{vsnprintf} is the older GLIBC 2.0.x style.
6000 @deftypefun int gmp_asprintf (char **@var{pp}, const char *@var{fmt}, @dots{})
6001 @deftypefunx int gmp_vasprintf (char **@var{pp}, const char *@var{fmt}, va_list @var{ap})
6002 Form a null-terminated string in a block of memory obtained from the current
6003 memory allocation function (@pxref{Custom Allocation}). The block will be the
6004 size of the string and null-terminator. The address of the block in stored to
6005 *@var{pp}. The return value is the number of characters produced, excluding
6006 the null-terminator.
6008 Unlike the C library @code{asprintf}, @code{gmp_asprintf} doesn't return
6009 @math{-1} if there's no more memory available, it lets the current allocation
6010 function handle that.
6013 @deftypefun int gmp_obstack_printf (struct obstack *@var{ob}, const char *@var{fmt}, @dots{})
6014 @deftypefunx int gmp_obstack_vprintf (struct obstack *@var{ob}, const char *@var{fmt}, va_list @var{ap})
6015 @cindex @code{obstack} output
6016 Append to the current object in @var{ob}. The return value is the number of
6017 characters written. A null-terminator is not written.
6019 @var{fmt} cannot be within the current object in @var{ob}, since that object
6020 might move as it grows.
6022 These functions are available only when the C library provides the obstack
6023 feature, which probably means only on GNU systems, see @ref{Obstacks,,
6024 Obstacks, libc, The GNU C Library Reference Manual}.
6028 @node C++ Formatted Output, , Formatted Output Functions, Formatted Output
6029 @section C++ Formatted Output
6030 @cindex C++ @code{ostream} output
6031 @cindex @code{ostream} output
6033 The following functions are provided in @file{libgmpxx} (@pxref{Headers and
6034 Libraries}), which is built if C++ support is enabled (@pxref{Build Options}).
6035 Prototypes are available from @code{<gmp.h>}.
6037 @deftypefun ostream& operator<< (ostream& @var{stream}, mpz_t @var{op})
6038 Print @var{op} to @var{stream}, using its @code{ios} formatting settings.
6039 @code{ios::width} is reset to 0 after output, the same as the standard
6040 @code{ostream operator<<} routines do.
6042 In hex or octal, @var{op} is printed as a signed number, the same as for
6043 decimal. This is unlike the standard @code{operator<<} routines on @code{int}
6044 etc, which instead give twos complement.
6047 @deftypefun ostream& operator<< (ostream& @var{stream}, mpq_t @var{op})
6048 Print @var{op} to @var{stream}, using its @code{ios} formatting settings.
6049 @code{ios::width} is reset to 0 after output, the same as the standard
6050 @code{ostream operator<<} routines do.
6052 Output will be a fraction like @samp{5/9}, or if the denominator is 1 then
6053 just a plain integer like @samp{123}.
6055 In hex or octal, @var{op} is printed as a signed value, the same as for
6056 decimal. If @code{ios::showbase} is set then a base indicator is shown on
6057 both the numerator and denominator (if the denominator is required).
6060 @deftypefun ostream& operator<< (ostream& @var{stream}, mpf_t @var{op})
6061 Print @var{op} to @var{stream}, using its @code{ios} formatting settings.
6062 @code{ios::width} is reset to 0 after output, the same as the standard
6063 @code{ostream operator<<} routines do.
6065 The decimal point follows the standard library float @code{operator<<}, which
6066 on recent systems means the @code{std::locale} imbued on @var{stream}.
6068 Hex and octal are supported, unlike the standard @code{operator<<} on
6069 @code{double}. The mantissa will be in hex or octal, the exponent will be in
6070 decimal. For hex the exponent delimiter is an @samp{@@}. This is as per
6073 @code{ios::showbase} is supported, and will put a base on the mantissa, for
6074 example hex @samp{0x1.8} or @samp{0x0.8}, or octal @samp{01.4} or @samp{00.4}.
6075 This last form is slightly strange, but at least differentiates itself from
6079 These operators mean that GMP types can be printed in the usual C++ way, for
6086 cout << "iteration " << n << " value " << z << "\n";
6089 But note that @code{ostream} output (and @code{istream} input, @pxref{C++
6090 Formatted Input}) is the only overloading available for the GMP types and that
6091 for instance using @code{+} with an @code{mpz_t} will have unpredictable
6092 results. For classes with overloading, see @ref{C++ Class Interface}.
6095 @node Formatted Input, C++ Class Interface, Formatted Output, Top
6096 @chapter Formatted Input
6097 @cindex Formatted input
6098 @cindex @code{scanf} formatted input
6101 * Formatted Input Strings::
6102 * Formatted Input Functions::
6103 * C++ Formatted Input::
6107 @node Formatted Input Strings, Formatted Input Functions, Formatted Input, Formatted Input
6108 @section Formatted Input Strings
6110 @code{gmp_scanf} and friends accept format strings similar to the standard C
6111 @code{scanf} (@pxref{Formatted Input,, Formatted Input, libc, The GNU C
6112 Library Reference Manual}). A format specification is of the form
6115 % [flags] [width] [type] conv
6118 GMP adds types @samp{Z}, @samp{Q} and @samp{F} for @code{mpz_t}, @code{mpq_t}
6119 and @code{mpf_t} respectively. @samp{Z} and @samp{Q} behave like integers.
6120 @samp{Q} will read a @samp{/} and a denominator, if present. @samp{F} behaves
6123 GMP variables don't require an @code{&} when passed to @code{gmp_scanf}, since
6124 they're already ``call-by-reference''. For example,
6127 /* to read say "a(5) = 1234" */
6130 gmp_scanf ("a(%d) = %Zd\n", &n, z);
6133 gmp_sscanf ("0377 + 0x10/0x11", "%Qi + %Qi", q1, q2);
6135 /* to read say "topleft (1.55,-2.66)" */
6138 gmp_scanf ("%31s (%Ff,%Ff)", buf, x, y);
6141 All the standard C @code{scanf} types behave the same as in the C library
6142 @code{scanf}, and can be freely intermixed with the GMP extensions. In the
6143 current implementation the standard parts of the format string are simply
6144 handed to @code{scanf} and only the GMP extensions handled directly.
6146 The flags accepted are as follows. @samp{a} and @samp{'} will depend on
6147 support from the C library, and @samp{'} cannot be used with GMP types.
6150 @multitable {(space)} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
6151 @item @nicode{*} @tab read but don't store
6152 @item @nicode{a} @tab allocate a buffer (string conversions)
6153 @item @nicode{'} @tab grouped digits, GLIBC style (not GMP types)
6157 The standard types accepted are as follows. @samp{h} and @samp{l} are
6158 portable, the rest will depend on the compiler (or include files) for the type
6159 and the C library for the input.
6162 @multitable {(space)} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
6163 @item @nicode{h} @tab @nicode{short}
6164 @item @nicode{hh} @tab @nicode{char}
6165 @item @nicode{j} @tab @nicode{intmax_t} or @nicode{uintmax_t}
6166 @item @nicode{l} @tab @nicode{long int}, @nicode{double} or @nicode{wchar_t}
6167 @item @nicode{ll} @tab @nicode{long long}
6168 @item @nicode{L} @tab @nicode{long double}
6169 @item @nicode{q} @tab @nicode{quad_t} or @nicode{u_quad_t}
6170 @item @nicode{t} @tab @nicode{ptrdiff_t}
6171 @item @nicode{z} @tab @nicode{size_t}
6179 @multitable {(space)} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
6180 @item @nicode{F} @tab @nicode{mpf_t}, float conversions
6181 @item @nicode{Q} @tab @nicode{mpq_t}, integer conversions
6182 @item @nicode{Z} @tab @nicode{mpz_t}, integer conversions
6186 The conversions accepted are as follows. @samp{p} and @samp{[} will depend on
6187 support from the C library, the rest are standard.
6190 @multitable {(space)} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
6191 @item @nicode{c} @tab character or characters
6192 @item @nicode{d} @tab decimal integer
6193 @item @nicode{e} @nicode{E} @nicode{f} @nicode{g} @nicode{G}
6195 @item @nicode{i} @tab integer with base indicator
6196 @item @nicode{n} @tab characters read so far
6197 @item @nicode{o} @tab octal integer
6198 @item @nicode{p} @tab pointer
6199 @item @nicode{s} @tab string of non-whitespace characters
6200 @item @nicode{u} @tab decimal integer
6201 @item @nicode{x} @nicode{X} @tab hex integer
6202 @item @nicode{[} @tab string of characters in a set
6206 @samp{e}, @samp{E}, @samp{f}, @samp{g} and @samp{G} are identical, they all
6207 read either fixed point or scientific format, and either upper or lower case
6208 @samp{e} for the exponent in scientific format.
6210 C99 style hex float format (@code{printf %a}, @pxref{Formatted Output
6211 Strings}) is always accepted for @code{mpf_t}, but for the standard float
6212 types it will depend on the C library.
6214 @samp{x} and @samp{X} are identical, both accept both upper and lower case
6217 @samp{o}, @samp{u}, @samp{x} and @samp{X} all read positive or negative
6218 values. For the standard C types these are described as ``unsigned''
6219 conversions, but that merely affects certain overflow handling, negatives are
6220 still allowed (per @code{strtoul}, @pxref{Parsing of Integers,, Parsing of
6221 Integers, libc, The GNU C Library Reference Manual}). For GMP types there are
6222 no overflows, so @samp{d} and @samp{u} are identical.
6224 @samp{Q} type reads the numerator and (optional) denominator as given. If the
6225 value might not be in canonical form then @code{mpq_canonicalize} must be
6226 called before using it in any calculations (@pxref{Rational Number
6229 @samp{Qi} will read a base specification separately for the numerator and
6230 denominator. For example @samp{0x10/11} would be 16/11, whereas
6231 @samp{0x10/0x11} would be 16/17.
6233 @samp{n} can be used with any of the types above, even the GMP types.
6234 @samp{*} to suppress assignment is allowed, though in that case it would do
6237 Other conversions or types that might be accepted by the C library
6238 @code{scanf} cannot be used through @code{gmp_scanf}.
6240 Whitespace is read and discarded before a field, except for @samp{c} and
6241 @samp{[} conversions.
6243 For float conversions, the decimal point character (or string) expected is
6244 taken from the current locale settings on systems which provide
6245 @code{localeconv} (@pxref{Locales,, Locales and Internationalization, libc,
6246 The GNU C Library Reference Manual}). The C library will normally do the same
6247 for standard float input.
6249 The format string is only interpreted as plain @code{char}s, multibyte
6250 characters are not recognised. Perhaps this will change in the future.
6253 @node Formatted Input Functions, C++ Formatted Input, Formatted Input Strings, Formatted Input
6254 @section Formatted Input Functions
6255 @cindex Input functions
6257 Each of the following functions is similar to the corresponding C library
6258 function. The plain @code{scanf} forms take a variable argument list. The
6259 @code{vscanf} forms take an argument pointer, see @ref{Variadic Functions,,
6260 Variadic Functions, libc, The GNU C Library Reference Manual}, or @samp{man 3
6263 It should be emphasised that if a format string is invalid, or the arguments
6264 don't match what the format specifies, then the behaviour of any of these
6265 functions will be unpredictable. GCC format string checking is not available,
6266 since it doesn't recognise the GMP extensions.
6268 No overlap is permitted between the @var{fmt} string and any of the results
6271 @deftypefun int gmp_scanf (const char *@var{fmt}, @dots{})
6272 @deftypefunx int gmp_vscanf (const char *@var{fmt}, va_list @var{ap})
6273 Read from the standard input @code{stdin}.
6276 @deftypefun int gmp_fscanf (FILE *@var{fp}, const char *@var{fmt}, @dots{})
6277 @deftypefunx int gmp_vfscanf (FILE *@var{fp}, const char *@var{fmt}, va_list @var{ap})
6278 Read from the stream @var{fp}.
6281 @deftypefun int gmp_sscanf (const char *@var{s}, const char *@var{fmt}, @dots{})
6282 @deftypefunx int gmp_vsscanf (const char *@var{s}, const char *@var{fmt}, va_list @var{ap})
6283 Read from a null-terminated string @var{s}.
6286 The return value from each of these functions is the same as the standard C99
6287 @code{scanf}, namely the number of fields successfully parsed and stored.
6288 @samp{%n} fields and fields read but suppressed by @samp{*} don't count
6289 towards the return value.
6291 If end of input (or a file error) is reached before a character for a field or
6292 a literal, and if no previous non-suppressed fields have matched, then the
6293 return value is @code{EOF} instead of 0. A whitespace character in the format
6294 string is only an optional match and doesn't induce an @code{EOF} in this
6295 fashion. Leading whitespace read and discarded for a field don't count as
6296 characters for that field.
6298 For the GMP types, input parsing follows C99 rules, namely one character of
6299 lookahead is used and characters are read while they continue to meet the
6300 format requirements. If this doesn't provide a complete number then the
6301 function terminates, with that field not stored nor counted towards the return
6302 value. For instance with @code{mpf_t} an input @samp{1.23e-XYZ} would be read
6303 up to the @samp{X} and that character pushed back since it's not a digit. The
6304 string @samp{1.23e-} would then be considered invalid since an @samp{e} must
6305 be followed by at least one digit.
6307 For the standard C types, in the current implementation GMP calls the C
6308 library @code{scanf} functions, which might have looser rules about what
6309 constitutes a valid input.
6311 Note that @code{gmp_sscanf} is the same as @code{gmp_fscanf} and only does one
6312 character of lookahead when parsing. Although clearly it could look at its
6313 entire input, it is deliberately made identical to @code{gmp_fscanf}, the same
6314 way C99 @code{sscanf} is the same as @code{fscanf}.
6317 @node C++ Formatted Input, , Formatted Input Functions, Formatted Input
6318 @section C++ Formatted Input
6319 @cindex C++ @code{istream} input
6320 @cindex @code{istream} input
6322 The following functions are provided in @file{libgmpxx} (@pxref{Headers and
6323 Libraries}), which is built only if C++ support is enabled (@pxref{Build
6324 Options}). Prototypes are available from @code{<gmp.h>}.
6326 @deftypefun istream& operator>> (istream& @var{stream}, mpz_t @var{rop})
6327 Read @var{rop} from @var{stream}, using its @code{ios} formatting settings.
6330 @deftypefun istream& operator>> (istream& @var{stream}, mpq_t @var{rop})
6331 An integer like @samp{123} will be read, or a fraction like @samp{5/9}. No
6332 whitespace is allowed around the @samp{/}. If the fraction is not in
6333 canonical form then @code{mpq_canonicalize} must be called (@pxref{Rational
6334 Number Functions}) before operating on it.
6336 As per integer input, an @samp{0} or @samp{0x} base indicator is read when
6337 none of @code{ios::dec}, @code{ios::oct} or @code{ios::hex} are set. This is
6338 done separately for numerator and denominator, so that for instance
6339 @samp{0x10/11} is @math{16/11} and @samp{0x10/0x11} is @math{16/17}.
6342 @deftypefun istream& operator>> (istream& @var{stream}, mpf_t @var{rop})
6343 Read @var{rop} from @var{stream}, using its @code{ios} formatting settings.
6345 Hex or octal floats are not supported, but might be in the future, or perhaps
6346 it's best to accept only what the standard float @code{operator>>} does.
6349 Note that digit grouping specified by the @code{istream} locale is currently
6350 not accepted. Perhaps this will change in the future.
6353 These operators mean that GMP types can be read in the usual C++ way, for
6362 But note that @code{istream} input (and @code{ostream} output, @pxref{C++
6363 Formatted Output}) is the only overloading available for the GMP types and
6364 that for instance using @code{+} with an @code{mpz_t} will have unpredictable
6365 results. For classes with overloading, see @ref{C++ Class Interface}.
6369 @node C++ Class Interface, BSD Compatible Functions, Formatted Input, Top
6370 @chapter C++ Class Interface
6371 @cindex C++ interface
6373 This chapter describes the C++ class based interface to GMP.
6375 All GMP C language types and functions can be used in C++ programs, since
6376 @file{gmp.h} has @code{extern "C"} qualifiers, but the class interface offers
6377 overloaded functions and operators which may be more convenient.
6379 Due to the implementation of this interface, a reasonably recent C++ compiler
6380 is required, one supporting namespaces, partial specialization of templates
6381 and member templates. For GCC this means version 2.91 or later.
6383 @strong{Everything described in this chapter is to be considered preliminary
6384 and might be subject to incompatible changes if some unforeseen difficulty
6388 * C++ Interface General::
6389 * C++ Interface Integers::
6390 * C++ Interface Rationals::
6391 * C++ Interface Floats::
6392 * C++ Interface Random Numbers::
6393 * C++ Interface Limitations::
6397 @node C++ Interface General, C++ Interface Integers, C++ Class Interface, C++ Class Interface
6398 @section C++ Interface General
6401 All the C++ classes and functions are available with
6403 @cindex @code{gmpxx.h}
6408 Programs should be linked with the @file{libgmpxx} and @file{libgmp}
6409 libraries. For example,
6412 g++ mycxxprog.cc -lgmpxx -lgmp
6416 The classes defined are
6418 @deftp Class mpz_class
6419 @deftpx Class mpq_class
6420 @deftpx Class mpf_class
6423 The standard operators and various standard functions are overloaded to allow
6424 arithmetic with these classes. For example,
6435 cout << "sum is " << c << "\n";
6436 cout << "absolute value is " << abs(c) << "\n";
6442 An important feature of the implementation is that an expression like
6443 @code{a=b+c} results in a single call to the corresponding @code{mpz_add},
6444 without using a temporary for the @code{b+c} part. Expressions which by their
6445 nature imply intermediate values, like @code{a=b*c+d*e}, still use temporaries
6448 The classes can be freely intermixed in expressions, as can the classes and
6449 the standard types @code{long}, @code{unsigned long} and @code{double}.
6450 Smaller types like @code{int} or @code{float} can also be intermixed, since
6451 C++ will promote them.
6453 Note that @code{bool} is not accepted directly, but must be explicitly cast to
6454 an @code{int} first. This is because C++ will automatically convert any
6455 pointer to a @code{bool}, so if GMP accepted @code{bool} it would make all
6456 sorts of invalid class and pointer combinations compile but almost certainly
6457 not do anything sensible.
6459 Conversions back from the classes to standard C++ types aren't done
6460 automatically, instead member functions like @code{get_si} are provided (see
6461 the following sections for details).
6463 Also there are no automatic conversions from the classes to the corresponding
6464 GMP C types, instead a reference to the underlying C object can be obtained
6465 with the following functions,
6467 @deftypefun mpz_t mpz_class::get_mpz_t ()
6468 @deftypefunx mpq_t mpq_class::get_mpq_t ()
6469 @deftypefunx mpf_t mpf_class::get_mpf_t ()
6472 These can be used to call a C function which doesn't have a C++ class
6473 interface. For example to set @code{a} to the GCD of @code{b} and @code{c},
6478 mpz_gcd (a.get_mpz_t(), b.get_mpz_t(), c.get_mpz_t());
6481 In the other direction, a class can be initialized from the corresponding GMP
6482 C type, or assigned to if an explicit constructor is used. In both cases this
6483 makes a copy of the value, it doesn't create any sort of association. For
6488 // ... init and calculate z ...
6494 There are no namespace setups in @file{gmpxx.h}, all types and functions are
6495 simply put into the global namespace. This is what @file{gmp.h} has done in
6496 the past, and continues to do for compatibility. The extras provided by
6497 @file{gmpxx.h} follow GMP naming conventions and are unlikely to clash with
6501 @node C++ Interface Integers, C++ Interface Rationals, C++ Interface General, C++ Class Interface
6502 @section C++ Interface Integers
6504 @deftypefun void mpz_class::mpz_class (type @var{n})
6505 Construct an @code{mpz_class}. All the standard C++ types may be used, except
6506 @code{long long} and @code{long double}, and all the GMP C++ classes can be
6507 used. Any necessary conversion follows the corresponding C function, for
6508 example @code{double} follows @code{mpz_set_d} (@pxref{Assigning Integers}).
6511 @deftypefun void mpz_class::mpz_class (mpz_t @var{z})
6512 Construct an @code{mpz_class} from an @code{mpz_t}. The value in @var{z} is
6513 copied into the new @code{mpz_class}, there won't be any permanent association
6514 between it and @var{z}.
6517 @deftypefun void mpz_class::mpz_class (const char *@var{s})
6518 @deftypefunx void mpz_class::mpz_class (const char *@var{s}, int @var{base} = 0)
6519 @deftypefunx void mpz_class::mpz_class (const string& @var{s})
6520 @deftypefunx void mpz_class::mpz_class (const string& @var{s}, int @var{base} = 0)
6521 Construct an @code{mpz_class} converted from a string using @code{mpz_set_str}
6522 (@pxref{Assigning Integers}).
6524 If the string is not a valid integer, an @code{std::invalid_argument}
6525 exception is thrown. The same applies to @code{operator=}.
6528 @deftypefun mpz_class operator/ (mpz_class @var{a}, mpz_class @var{d})
6529 @deftypefunx mpz_class operator% (mpz_class @var{a}, mpz_class @var{d})
6530 Divisions involving @code{mpz_class} round towards zero, as per the
6531 @code{mpz_tdiv_q} and @code{mpz_tdiv_r} functions (@pxref{Integer Division}).
6532 This is the same as the C99 @code{/} and @code{%} operators.
6534 The @code{mpz_fdiv@dots{}} or @code{mpz_cdiv@dots{}} functions can always be called
6535 directly if desired. For example,
6540 mpz_fdiv_q (q.get_mpz_t(), a.get_mpz_t(), d.get_mpz_t());
6544 @deftypefun mpz_class abs (mpz_class @var{op1})
6545 @deftypefunx int cmp (mpz_class @var{op1}, type @var{op2})
6546 @deftypefunx int cmp (type @var{op1}, mpz_class @var{op2})
6548 @deftypefunx bool mpz_class::fits_sint_p (void)
6549 @deftypefunx bool mpz_class::fits_slong_p (void)
6550 @deftypefunx bool mpz_class::fits_sshort_p (void)
6552 @deftypefunx bool mpz_class::fits_uint_p (void)
6553 @deftypefunx bool mpz_class::fits_ulong_p (void)
6554 @deftypefunx bool mpz_class::fits_ushort_p (void)
6556 @deftypefunx double mpz_class::get_d (void)
6557 @deftypefunx long mpz_class::get_si (void)
6558 @deftypefunx string mpz_class::get_str (int @var{base} = 10)
6559 @deftypefunx {unsigned long} mpz_class::get_ui (void)
6561 @deftypefunx int mpz_class::set_str (const char *@var{str}, int @var{base})
6562 @deftypefunx int mpz_class::set_str (const string& @var{str}, int @var{base})
6563 @deftypefunx int sgn (mpz_class @var{op})
6564 @deftypefunx mpz_class sqrt (mpz_class @var{op})
6565 These functions provide a C++ class interface to the corresponding GMP C
6568 @code{cmp} can be used with any of the classes or the standard C++ types,
6569 except @code{long long} and @code{long double}.
6573 Overloaded operators for combinations of @code{mpz_class} and @code{double}
6574 are provided for completeness, but it should be noted that if the given
6575 @code{double} is not an integer then the way any rounding is done is currently
6576 unspecified. The rounding might take place at the start, in the middle, or at
6577 the end of the operation, and it might change in the future.
6579 Conversions between @code{mpz_class} and @code{double}, however, are defined
6580 to follow the corresponding C functions @code{mpz_get_d} and @code{mpz_set_d}.
6581 And comparisons are always made exactly, as per @code{mpz_cmp_d}.
6584 @node C++ Interface Rationals, C++ Interface Floats, C++ Interface Integers, C++ Class Interface
6585 @section C++ Interface Rationals
6587 In all the following constructors, if a fraction is given then it should be in
6588 canonical form, or if not then @code{mpq_class::canonicalize} called.
6590 @deftypefun void mpq_class::mpq_class (type @var{op})
6591 @deftypefunx void mpq_class::mpq_class (integer @var{num}, integer @var{den})
6592 Construct an @code{mpq_class}. The initial value can be a single value of any
6593 type, or a pair of integers (@code{mpz_class} or standard C++ integer types)
6594 representing a fraction, except that @code{long long} and @code{long double}
6595 are not supported. For example,
6604 @deftypefun void mpq_class::mpq_class (mpq_t @var{q})
6605 Construct an @code{mpq_class} from an @code{mpq_t}. The value in @var{q} is
6606 copied into the new @code{mpq_class}, there won't be any permanent association
6607 between it and @var{q}.
6610 @deftypefun void mpq_class::mpq_class (const char *@var{s})
6611 @deftypefunx void mpq_class::mpq_class (const char *@var{s}, int @var{base} = 0)
6612 @deftypefunx void mpq_class::mpq_class (const string& @var{s})
6613 @deftypefunx void mpq_class::mpq_class (const string& @var{s}, int @var{base} = 0)
6614 Construct an @code{mpq_class} converted from a string using @code{mpq_set_str}
6615 (@pxref{Initializing Rationals}).
6617 If the string is not a valid rational, an @code{std::invalid_argument}
6618 exception is thrown. The same applies to @code{operator=}.
6621 @deftypefun void mpq_class::canonicalize ()
6622 Put an @code{mpq_class} into canonical form, as per @ref{Rational Number
6623 Functions}. All arithmetic operators require their operands in canonical
6624 form, and will return results in canonical form.
6627 @deftypefun mpq_class abs (mpq_class @var{op})
6628 @deftypefunx int cmp (mpq_class @var{op1}, type @var{op2})
6629 @deftypefunx int cmp (type @var{op1}, mpq_class @var{op2})
6631 @deftypefunx double mpq_class::get_d (void)
6632 @deftypefunx string mpq_class::get_str (int @var{base} = 10)
6634 @deftypefunx int mpq_class::set_str (const char *@var{str}, int @var{base})
6635 @deftypefunx int mpq_class::set_str (const string& @var{str}, int @var{base})
6636 @deftypefunx int sgn (mpq_class @var{op})
6637 These functions provide a C++ class interface to the corresponding GMP C
6640 @code{cmp} can be used with any of the classes or the standard C++ types,
6641 except @code{long long} and @code{long double}.
6644 @deftypefun {mpz_class&} mpq_class::get_num ()
6645 @deftypefunx {mpz_class&} mpq_class::get_den ()
6646 Get a reference to an @code{mpz_class} which is the numerator or denominator
6647 of an @code{mpq_class}. This can be used both for read and write access. If
6648 the object returned is modified, it modifies the original @code{mpq_class}.
6650 If direct manipulation might produce a non-canonical value, then
6651 @code{mpq_class::canonicalize} must be called before further operations.
6654 @deftypefun mpz_t mpq_class::get_num_mpz_t ()
6655 @deftypefunx mpz_t mpq_class::get_den_mpz_t ()
6656 Get a reference to the underlying @code{mpz_t} numerator or denominator of an
6657 @code{mpq_class}. This can be passed to C functions expecting an
6658 @code{mpz_t}. Any modifications made to the @code{mpz_t} will modify the
6659 original @code{mpq_class}.
6661 If direct manipulation might produce a non-canonical value, then
6662 @code{mpq_class::canonicalize} must be called before further operations.
6665 @deftypefun istream& operator>> (istream& @var{stream}, mpq_class& @var{rop});
6666 Read @var{rop} from @var{stream}, using its @code{ios} formatting settings,
6667 the same as @code{mpq_t operator>>} (@pxref{C++ Formatted Input}).
6669 If the @var{rop} read might not be in canonical form then
6670 @code{mpq_class::canonicalize} must be called.
6674 @node C++ Interface Floats, C++ Interface Random Numbers, C++ Interface Rationals, C++ Class Interface
6675 @section C++ Interface Floats
6677 When an expression requires the use of temporary intermediate @code{mpf_class}
6678 values, like @code{f=g*h+x*y}, those temporaries will have the same precision
6679 as the destination @code{f}. Explicit constructors can be used if this
6682 @deftypefun {} mpf_class::mpf_class (type @var{op})
6683 @deftypefunx {} mpf_class::mpf_class (type @var{op}, unsigned long @var{prec})
6684 Construct an @code{mpf_class}. Any standard C++ type can be used, except
6685 @code{long long} and @code{long double}, and any of the GMP C++ classes can be
6688 If @var{prec} is given, the initial precision is that value, in bits. If
6689 @var{prec} is not given, then the initial precision is determined by the type
6690 of @var{op} given. An @code{mpz_class}, @code{mpq_class}, or C++
6691 builtin type will give the default @code{mpf} precision (@pxref{Initializing
6692 Floats}). An @code{mpf_class} or expression will give the precision of that
6693 value. The precision of a binary expression is the higher of the two
6697 mpf_class f(1.5); // default precision
6698 mpf_class f(1.5, 500); // 500 bits (at least)
6699 mpf_class f(x); // precision of x
6700 mpf_class f(abs(x)); // precision of x
6701 mpf_class f(-g, 1000); // 1000 bits (at least)
6702 mpf_class f(x+y); // greater of precisions of x and y
6706 @deftypefun void mpf_class::mpf_class (const char *@var{s})
6707 @deftypefunx void mpf_class::mpf_class (const char *@var{s}, unsigned long @var{prec}, int @var{base} = 0)
6708 @deftypefunx void mpf_class::mpf_class (const string& @var{s})
6709 @deftypefunx void mpf_class::mpf_class (const string& @var{s}, unsigned long @var{prec}, int @var{base} = 0)
6710 Construct an @code{mpf_class} converted from a string using @code{mpf_set_str}
6711 (@pxref{Assigning Floats}). If @var{prec} is given, the initial precision is
6712 that value, in bits. If not, the default @code{mpf} precision
6713 (@pxref{Initializing Floats}) is used.
6715 If the string is not a valid float, an @code{std::invalid_argument} exception
6716 is thrown. The same applies to @code{operator=}.
6719 @deftypefun {mpf_class&} mpf_class::operator= (type @var{op})
6720 Convert and store the given @var{op} value to an @code{mpf_class} object. The
6721 same types are accepted as for the constructors above.
6723 Note that @code{operator=} only stores a new value, it doesn't copy or change
6724 the precision of the destination, instead the value is truncated if necessary.
6725 This is the same as @code{mpf_set} etc. Note in particular this means for
6726 @code{mpf_class} a copy constructor is not the same as a default constructor
6730 mpf_class x (y); // x created with precision of y
6732 mpf_class x; // x created with default precision
6733 x = y; // value truncated to that precision
6736 Applications using templated code may need to be careful about the assumptions
6737 the code makes in this area, when working with @code{mpf_class} values of
6738 various different or non-default precisions. For instance implementations of
6739 the standard @code{complex} template have been seen in both styles above,
6740 though of course @code{complex} is normally only actually specified for use
6741 with the builtin float types.
6744 @deftypefun mpf_class abs (mpf_class @var{op})
6745 @deftypefunx mpf_class ceil (mpf_class @var{op})
6746 @deftypefunx int cmp (mpf_class @var{op1}, type @var{op2})
6747 @deftypefunx int cmp (type @var{op1}, mpf_class @var{op2})
6749 @deftypefunx bool mpf_class::fits_sint_p (void)
6750 @deftypefunx bool mpf_class::fits_slong_p (void)
6751 @deftypefunx bool mpf_class::fits_sshort_p (void)
6753 @deftypefunx bool mpf_class::fits_uint_p (void)
6754 @deftypefunx bool mpf_class::fits_ulong_p (void)
6755 @deftypefunx bool mpf_class::fits_ushort_p (void)
6757 @deftypefunx mpf_class floor (mpf_class @var{op})
6758 @deftypefunx mpf_class hypot (mpf_class @var{op1}, mpf_class @var{op2})
6760 @deftypefunx double mpf_class::get_d (void)
6761 @deftypefunx long mpf_class::get_si (void)
6762 @deftypefunx string mpf_class::get_str (mp_exp_t& @var{exp}, int @var{base} = 10, size_t @var{digits} = 0)
6763 @deftypefunx {unsigned long} mpf_class::get_ui (void)
6765 @deftypefunx int mpf_class::set_str (const char *@var{str}, int @var{base})
6766 @deftypefunx int mpf_class::set_str (const string& @var{str}, int @var{base})
6767 @deftypefunx int sgn (mpf_class @var{op})
6768 @deftypefunx mpf_class sqrt (mpf_class @var{op})
6769 @deftypefunx mpf_class trunc (mpf_class @var{op})
6770 These functions provide a C++ class interface to the corresponding GMP C
6773 @code{cmp} can be used with any of the classes or the standard C++ types,
6774 except @code{long long} and @code{long double}.
6776 The accuracy provided by @code{hypot} is not currently guaranteed.
6779 @deftypefun {mp_bitcnt_t} mpf_class::get_prec ()
6780 @deftypefunx void mpf_class::set_prec (mp_bitcnt_t @var{prec})
6781 @deftypefunx void mpf_class::set_prec_raw (mp_bitcnt_t @var{prec})
6782 Get or set the current precision of an @code{mpf_class}.
6784 The restrictions described for @code{mpf_set_prec_raw} (@pxref{Initializing
6785 Floats}) apply to @code{mpf_class::set_prec_raw}. Note in particular that the
6786 @code{mpf_class} must be restored to it's allocated precision before being
6787 destroyed. This must be done by application code, there's no automatic
6792 @node C++ Interface Random Numbers, C++ Interface Limitations, C++ Interface Floats, C++ Class Interface
6793 @section C++ Interface Random Numbers
6795 @deftp Class gmp_randclass
6796 The C++ class interface to the GMP random number functions uses
6797 @code{gmp_randclass} to hold an algorithm selection and current state, as per
6798 @code{gmp_randstate_t}.
6801 @deftypefun {} gmp_randclass::gmp_randclass (void (*@var{randinit}) (gmp_randstate_t, @dots{}), @dots{})
6802 Construct a @code{gmp_randclass}, using a call to the given @var{randinit}
6803 function (@pxref{Random State Initialization}). The arguments expected are
6804 the same as @var{randinit}, but with @code{mpz_class} instead of @code{mpz_t}.
6808 gmp_randclass r1 (gmp_randinit_default);
6809 gmp_randclass r2 (gmp_randinit_lc_2exp_size, 32);
6810 gmp_randclass r3 (gmp_randinit_lc_2exp, a, c, m2exp);
6811 gmp_randclass r4 (gmp_randinit_mt);
6814 @code{gmp_randinit_lc_2exp_size} will fail if the size requested is too big,
6815 an @code{std::length_error} exception is thrown in that case.
6818 @deftypefun {} gmp_randclass::gmp_randclass (gmp_randalg_t @var{alg}, @dots{})
6819 Construct a @code{gmp_randclass} using the same parameters as
6820 @code{gmp_randinit} (@pxref{Random State Initialization}). This function is
6821 obsolete and the above @var{randinit} style should be preferred.
6824 @deftypefun void gmp_randclass::seed (unsigned long int @var{s})
6825 @deftypefunx void gmp_randclass::seed (mpz_class @var{s})
6826 Seed a random number generator. See @pxref{Random Number Functions}, for how
6827 to choose a good seed.
6830 @deftypefun mpz_class gmp_randclass::get_z_bits (unsigned long @var{bits})
6831 @deftypefunx mpz_class gmp_randclass::get_z_bits (mpz_class @var{bits})
6832 Generate a random integer with a specified number of bits.
6835 @deftypefun mpz_class gmp_randclass::get_z_range (mpz_class @var{n})
6836 Generate a random integer in the range 0 to @math{@var{n}-1} inclusive.
6839 @deftypefun mpf_class gmp_randclass::get_f ()
6840 @deftypefunx mpf_class gmp_randclass::get_f (unsigned long @var{prec})
6841 Generate a random float @var{f} in the range @math{0 <= @var{f} < 1}. @var{f}
6842 will be to @var{prec} bits precision, or if @var{prec} is not given then to
6843 the precision of the destination. For example,
6848 mpf_class f (0, 512); // 512 bits precision
6849 f = r.get_f(); // random number, 512 bits
6855 @node C++ Interface Limitations, , C++ Interface Random Numbers, C++ Class Interface
6856 @section C++ Interface Limitations
6859 @item @code{mpq_class} and Templated Reading
6860 A generic piece of template code probably won't know that @code{mpq_class}
6861 requires a @code{canonicalize} call if inputs read with @code{operator>>}
6862 might be non-canonical. This can lead to incorrect results.
6864 @code{operator>>} behaves as it does for reasons of efficiency. A
6865 canonicalize can be quite time consuming on large operands, and is best
6866 avoided if it's not necessary.
6868 But this potential difficulty reduces the usefulness of @code{mpq_class}.
6869 Perhaps a mechanism to tell @code{operator>>} what to do will be adopted in
6870 the future, maybe a preprocessor define, a global flag, or an @code{ios} flag
6871 pressed into service. Or maybe, at the risk of inconsistency, the
6872 @code{mpq_class} @code{operator>>} could canonicalize and leave @code{mpq_t}
6873 @code{operator>>} not doing so, for use on those occasions when that's
6874 acceptable. Send feedback or alternate ideas to @email{gmp-bugs@@gmplib.org}.
6877 Subclassing the GMP C++ classes works, but is not currently recommended.
6879 Expressions involving subclasses resolve correctly (or seem to), but in normal
6880 C++ fashion the subclass doesn't inherit constructors and assignments.
6881 There's many of those in the GMP classes, and a good way to reestablish them
6882 in a subclass is not yet provided.
6884 @item Templated Expressions
6885 A subtle difficulty exists when using expressions together with
6886 application-defined template functions. Consider the following, with @code{T}
6887 intended to be some numeric type,
6891 T fun (const T &, const T &);
6895 When used with, say, plain @code{mpz_class} variables, it works fine: @code{T}
6896 is resolved as @code{mpz_class}.
6899 mpz_class f(1), g(2);
6904 But when one of the arguments is an expression, it doesn't work.
6907 mpz_class f(1), g(2), h(3);
6908 fun (f, g+h); // Bad
6911 This is because @code{g+h} ends up being a certain expression template type
6912 internal to @code{gmpxx.h}, which the C++ template resolution rules are unable
6913 to automatically convert to @code{mpz_class}. The workaround is simply to add
6917 mpz_class f(1), g(2), h(3);
6918 fun (f, mpz_class(g+h)); // Good
6921 Similarly, within @code{fun} it may be necessary to cast an expression to type
6922 @code{T} when calling a templated @code{fun2}.
6928 fun2 (f, f+g); // Bad
6934 fun2 (f, T(f+g)); // Good
6940 @node BSD Compatible Functions, Custom Allocation, C++ Class Interface, Top
6941 @comment node-name, next, previous, up
6942 @chapter Berkeley MP Compatible Functions
6943 @cindex Berkeley MP compatible functions
6944 @cindex BSD MP compatible functions
6946 These functions are intended to be fully compatible with the Berkeley MP
6947 library which is available on many BSD derived U*ix systems. The
6948 @samp{--enable-mpbsd} option must be used when building GNU MP to make these
6949 available (@pxref{Installing GMP}).
6951 The original Berkeley MP library has a usage restriction: you cannot use the
6952 same variable as both source and destination in a single function call. The
6953 compatible functions in GNU MP do not share this restriction---inputs and
6954 outputs may overlap.
6956 It is not recommended that new programs are written using these functions.
6957 Apart from the incomplete set of functions, the interface for initializing
6958 @code{MINT} objects is more error prone, and the @code{pow} function collides
6959 with @code{pow} in @file{libm.a}.
6963 Include the header @file{mp.h} to get the definition of the necessary types and
6964 functions. If you are on a BSD derived system, make sure to include GNU
6965 @file{mp.h} if you are going to link the GNU @file{libmp.a} to your program.
6966 This means that you probably need to give the @samp{-I<dir>} option to the
6967 compiler, where @samp{<dir>} is the directory where you have GNU @file{mp.h}.
6969 @deftypefun {MINT *} itom (signed short int @var{initial_value})
6970 Allocate an integer consisting of a @code{MINT} object and dynamic limb space.
6971 Initialize the integer to @var{initial_value}. Return a pointer to the
6975 @deftypefun {MINT *} xtom (char *@var{initial_value})
6976 Allocate an integer consisting of a @code{MINT} object and dynamic limb space.
6977 Initialize the integer from @var{initial_value}, a hexadecimal,
6978 null-terminated C string. Return a pointer to the @code{MINT} object.
6981 @deftypefun void move (MINT *@var{src}, MINT *@var{dest})
6982 Set @var{dest} to @var{src} by copying. Both variables must be previously
6986 @deftypefun void madd (MINT *@var{src_1}, MINT *@var{src_2}, MINT *@var{destination})
6987 Add @var{src_1} and @var{src_2} and put the sum in @var{destination}.
6990 @deftypefun void msub (MINT *@var{src_1}, MINT *@var{src_2}, MINT *@var{destination})
6991 Subtract @var{src_2} from @var{src_1} and put the difference in
6995 @deftypefun void mult (MINT *@var{src_1}, MINT *@var{src_2}, MINT *@var{destination})
6996 Multiply @var{src_1} and @var{src_2} and put the product in @var{destination}.
6999 @deftypefun void mdiv (MINT *@var{dividend}, MINT *@var{divisor}, MINT *@var{quotient}, MINT *@var{remainder})
7000 @deftypefunx void sdiv (MINT *@var{dividend}, signed short int @var{divisor}, MINT *@var{quotient}, signed short int *@var{remainder})
7001 Set @var{quotient} to @var{dividend}/@var{divisor}, and @var{remainder} to
7002 @var{dividend} mod @var{divisor}. The quotient is rounded towards zero; the
7003 remainder has the same sign as the dividend unless it is zero.
7005 Some implementations of these functions work differently---or not at all---for
7009 @deftypefun void msqrt (MINT *@var{op}, MINT *@var{root}, MINT *@var{remainder})
7010 Set @var{root} to @m{\lfloor\sqrt{@var{op}}\rfloor, the truncated integer part
7011 of the square root of @var{op}}, like @code{mpz_sqrt}. Set @var{remainder} to
7012 @m{(@var{op} - @var{root}^2), @var{op}@minus{}@var{root}*@var{root}}, i.e.
7013 zero if @var{op} is a perfect square.
7015 If @var{root} and @var{remainder} are the same variable, the results are
7019 @deftypefun void pow (MINT *@var{base}, MINT *@var{exp}, MINT *@var{mod}, MINT *@var{dest})
7020 Set @var{dest} to (@var{base} raised to @var{exp}) modulo @var{mod}.
7022 Note that the name @code{pow} clashes with @code{pow} from the standard C math
7023 library (@pxref{Exponents and Logarithms,, Exponentiation and Logarithms,
7024 libc, The GNU C Library Reference Manual}). An application will only be able
7025 to use one or the other.
7028 @deftypefun void rpow (MINT *@var{base}, signed short int @var{exp}, MINT *@var{dest})
7029 Set @var{dest} to @var{base} raised to @var{exp}.
7032 @deftypefun void gcd (MINT *@var{op1}, MINT *@var{op2}, MINT *@var{res})
7033 Set @var{res} to the greatest common divisor of @var{op1} and @var{op2}.
7036 @deftypefun int mcmp (MINT *@var{op1}, MINT *@var{op2})
7037 Compare @var{op1} and @var{op2}. Return a positive value if @var{op1} >
7038 @var{op2}, zero if @var{op1} = @var{op2}, and a negative value if @var{op1} <
7042 @deftypefun void min (MINT *@var{dest})
7043 Input a decimal string from @code{stdin}, and put the read integer in
7044 @var{dest}. SPC and TAB are allowed in the number string, and are ignored.
7047 @deftypefun void mout (MINT *@var{src})
7048 Output @var{src} to @code{stdout}, as a decimal string. Also output a newline.
7051 @deftypefun {char *} mtox (MINT *@var{op})
7052 Convert @var{op} to a hexadecimal string, and return a pointer to the string.
7053 The returned string is allocated using the default memory allocation function,
7054 @code{malloc} by default. It will be @code{strlen(str)+1} bytes, that being
7055 exactly enough for the string and null-terminator.
7058 @deftypefun void mfree (MINT *@var{op})
7059 De-allocate, the space used by @var{op}. @strong{This function should only be
7060 passed a value returned by @code{itom} or @code{xtom}.}
7064 @node Custom Allocation, Language Bindings, BSD Compatible Functions, Top
7065 @comment node-name, next, previous, up
7066 @chapter Custom Allocation
7067 @cindex Custom allocation
7068 @cindex Memory allocation
7069 @cindex Allocation of memory
7071 By default GMP uses @code{malloc}, @code{realloc} and @code{free} for memory
7072 allocation, and if they fail GMP prints a message to the standard error output
7073 and terminates the program.
7075 Alternate functions can be specified, to allocate memory in a different way or
7076 to have a different error action on running out of memory.
7078 This feature is available in the Berkeley compatibility library (@pxref{BSD
7079 Compatible Functions}) as well as the main GMP library.
7081 @deftypefun void mp_set_memory_functions (@* void *(*@var{alloc_func_ptr}) (size_t), @* void *(*@var{realloc_func_ptr}) (void *, size_t, size_t), @* void (*@var{free_func_ptr}) (void *, size_t))
7082 Replace the current allocation functions from the arguments. If an argument
7083 is @code{NULL}, the corresponding default function is used.
7085 These functions will be used for all memory allocation done by GMP, apart from
7086 temporary space from @code{alloca} if that function is available and GMP is
7087 configured to use it (@pxref{Build Options}).
7089 @strong{Be sure to call @code{mp_set_memory_functions} only when there are no
7090 active GMP objects allocated using the previous memory functions! Usually
7091 that means calling it before any other GMP function.}
7094 The functions supplied should fit the following declarations:
7096 @deftypevr Function {void *} allocate_function (size_t @var{alloc_size})
7097 Return a pointer to newly allocated space with at least @var{alloc_size}
7101 @deftypevr Function {void *} reallocate_function (void *@var{ptr}, size_t @var{old_size}, size_t @var{new_size})
7102 Resize a previously allocated block @var{ptr} of @var{old_size} bytes to be
7103 @var{new_size} bytes.
7105 The block may be moved if necessary or if desired, and in that case the
7106 smaller of @var{old_size} and @var{new_size} bytes must be copied to the new
7107 location. The return value is a pointer to the resized block, that being the
7108 new location if moved or just @var{ptr} if not.
7110 @var{ptr} is never @code{NULL}, it's always a previously allocated block.
7111 @var{new_size} may be bigger or smaller than @var{old_size}.
7114 @deftypevr Function void free_function (void *@var{ptr}, size_t @var{size})
7115 De-allocate the space pointed to by @var{ptr}.
7117 @var{ptr} is never @code{NULL}, it's always a previously allocated block of
7121 A @dfn{byte} here means the unit used by the @code{sizeof} operator.
7123 The @var{old_size} parameters to @var{reallocate_function} and
7124 @var{free_function} are passed for convenience, but of course can be ignored
7125 if not needed. The default functions using @code{malloc} and friends for
7126 instance don't use them.
7128 No error return is allowed from any of these functions, if they return then
7129 they must have performed the specified operation. In particular note that
7130 @var{allocate_function} or @var{reallocate_function} mustn't return
7133 Getting a different fatal error action is a good use for custom allocation
7134 functions, for example giving a graphical dialog rather than the default print
7135 to @code{stderr}. How much is possible when genuinely out of memory is
7136 another question though.
7138 There's currently no defined way for the allocation functions to recover from
7139 an error such as out of memory, they must terminate program execution. A
7140 @code{longjmp} or throwing a C++ exception will have undefined results. This
7141 may change in the future.
7143 GMP may use allocated blocks to hold pointers to other allocated blocks. This
7144 will limit the assumptions a conservative garbage collection scheme can make.
7146 Since the default GMP allocation uses @code{malloc} and friends, those
7147 functions will be linked in even if the first thing a program does is an
7148 @code{mp_set_memory_functions}. It's necessary to change the GMP sources if
7152 @deftypefun void mp_get_memory_functions (@* void *(**@var{alloc_func_ptr}) (size_t), @* void *(**@var{realloc_func_ptr}) (void *, size_t, size_t), @* void (**@var{free_func_ptr}) (void *, size_t))
7153 Get the current allocation functions, storing function pointers to the
7154 locations given by the arguments. If an argument is @code{NULL}, that
7155 function pointer is not stored.
7158 For example, to get just the current free function,
7161 void (*freefunc) (void *, size_t);
7163 mp_get_memory_functions (NULL, NULL, &freefunc);
7167 @node Language Bindings, Algorithms, Custom Allocation, Top
7168 @chapter Language Bindings
7169 @cindex Language bindings
7170 @cindex Other languages
7172 The following packages and projects offer access to GMP from languages other
7173 than C, though perhaps with varying levels of functionality and efficiency.
7175 @c @spaceuref{U} is the same as @uref{U}, but with a couple of extra spaces
7176 @c in tex, just to separate the URL from the preceding text a bit.
7178 @macro spaceuref {U}
7183 @macro spaceuref {U}
7193 GMP C++ class interface, @pxref{C++ Class Interface} @* Straightforward
7194 interface, expression templates to eliminate temporaries.
7196 ALP @spaceuref{http://www-sop.inria.fr/saga/logiciels/ALP/} @* Linear algebra and
7197 polynomials using templates.
7199 Arithmos @spaceuref{http://www.win.ua.ac.be/~cant/arithmos/} @* Rationals
7200 with infinities and square roots.
7202 CLN @spaceuref{http://www.ginac.de/CLN/} @* High level classes for arithmetic.
7204 LiDIA @spaceuref{http://www.cdc.informatik.tu-darmstadt.de/TI/LiDIA/} @* A C++
7205 library for computational number theory.
7207 Linbox @spaceuref{http://www.linalg.org/} @* Sparse vectors and matrices.
7209 NTL @spaceuref{http://www.shoup.net/ntl/} @* A C++ number theory library.
7215 @c gmp-d @spaceuref{http://home.comcast.net/~benhinkle/gmp-d/}
7221 Omni F77 @spaceuref{http://phase.hpcc.jp/Omni/home.html} @* Arbitrary
7228 Glasgow Haskell Compiler @spaceuref{http://www.haskell.org/ghc/}
7234 Kaffe @spaceuref{http://www.kaffe.org/}
7236 Kissme @spaceuref{http://kissme.sourceforge.net/}
7242 GNU Common Lisp @spaceuref{http://www.gnu.org/software/gcl/gcl.html}
7244 Librep @spaceuref{http://librep.sourceforge.net/}
7246 @c FIXME: When there's a stable release with gmp support, just refer to it
7247 @c rather than bothering to talk about betas.
7248 XEmacs (21.5.18 beta and up) @spaceuref{http://www.xemacs.org} @* Optional
7249 big integers, rationals and floats using GMP.
7255 @c FIXME: When there's a stable release with gmp support, just refer to it
7256 @c rather than bothering to talk about betas.
7257 GNU m4 betas @spaceuref{http://www.seindal.dk/rene/gnu/} @* Optionally provides
7258 an arbitrary precision @code{mpeval}.
7264 MLton compiler @spaceuref{http://mlton.org/}
7267 @item Objective Caml
7270 MLGMP @spaceuref{http://www.di.ens.fr/~monniaux/programmes.html.en}
7272 Numerix @spaceuref{http://pauillac.inria.fr/~quercia/} @* Optionally using
7279 Mozart @spaceuref{http://www.mozart-oz.org/}
7285 GNU Pascal Compiler @spaceuref{http://www.gnu-pascal.de/} @* GMP unit.
7287 Numerix @spaceuref{http://pauillac.inria.fr/~quercia/} @* For Free Pascal,
7288 optionally using GMP.
7294 GMP module, see @file{demos/perl} in the GMP sources (@pxref{Demonstration
7297 Math::GMP @spaceuref{http://www.cpan.org/} @* Compatible with Math::BigInt, but
7298 not as many functions as the GMP module above.
7300 Math::BigInt::GMP @spaceuref{http://www.cpan.org/} @* Plug Math::GMP into
7301 normal Math::BigInt operations.
7308 mpz module in the standard distribution, @uref{http://pike.ida.liu.se/}
7315 SWI Prolog @spaceuref{http://www.swi-prolog.org/} @*
7316 Arbitrary precision floats.
7322 mpz module in the standard distribution, @uref{http://www.python.org/}
7324 GMPY @uref{http://gmpy.sourceforge.net/}
7330 GNU Guile (upcoming 1.8) @spaceuref{http://www.gnu.org/software/guile/guile.html}
7332 RScheme @spaceuref{http://www.rscheme.org/}
7334 STklos @spaceuref{http://www.stklos.org/}
7336 @c For reference, MzScheme uses some of gmp, but (as of version 205) it only
7337 @c has copies of some of the generic C code, and we don't consider that a
7338 @c language binding to gmp.
7345 GNU Smalltalk @spaceuref{http://www.smalltalk.org/versions/GNUSmalltalk.html}
7351 Axiom @uref{http://savannah.nongnu.org/projects/axiom} @* Computer algebra
7354 DrGenius @spaceuref{http://drgenius.seul.org/} @* Geometry system and
7355 mathematical programming language.
7357 GiNaC @spaceuref{http://www.ginac.de/} @* C++ computer algebra using CLN.
7359 GOO @spaceuref{http://www.googoogaga.org/} @* Dynamic object oriented
7362 Maxima @uref{http://www.ma.utexas.edu/users/wfs/maxima.html} @* Macsyma
7363 computer algebra using GCL.
7365 Q @spaceuref{http://q-lang.sourceforge.net/} @* Equational programming system.
7367 Regina @spaceuref{http://regina.sourceforge.net/} @* Topological calculator.
7369 Yacas @spaceuref{http://www.xs4all.nl/~apinkus/yacas.html} @* Yet another
7370 computer algebra system.
7376 @node Algorithms, Internals, Language Bindings, Top
7380 This chapter is an introduction to some of the algorithms used for various GMP
7381 operations. The code is likely to be hard to understand without knowing
7382 something about the algorithms.
7384 Some GMP internals are mentioned, but applications that expect to be
7385 compatible with future GMP releases should take care to use only the
7386 documented functions.
7389 * Multiplication Algorithms::
7390 * Division Algorithms::
7391 * Greatest Common Divisor Algorithms::
7392 * Powering Algorithms::
7393 * Root Extraction Algorithms::
7394 * Radix Conversion Algorithms::
7395 * Other Algorithms::
7400 @node Multiplication Algorithms, Division Algorithms, Algorithms, Algorithms
7401 @section Multiplication
7402 @cindex Multiplication algorithms
7404 N@cross{}N limb multiplications and squares are done using one of five
7405 algorithms, as the size N increases.
7408 @multitable {KaratsubaMMM} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
7409 @item Algorithm @tab Threshold
7410 @item Basecase @tab (none)
7411 @item Karatsuba @tab @code{MUL_TOOM22_THRESHOLD}
7412 @item Toom-3 @tab @code{MUL_TOOM33_THRESHOLD}
7413 @item Toom-4 @tab @code{MUL_TOOM44_THRESHOLD}
7414 @item FFT @tab @code{MUL_FFT_THRESHOLD}
7418 Similarly for squaring, with the @code{SQR} thresholds.
7420 N@cross{}M multiplications of operands with different sizes above
7421 @code{MUL_TOOM22_THRESHOLD} are currently done by special Toom-inspired
7422 algorithms or directly with FFT, depending on operand size (@pxref{Unbalanced
7426 * Basecase Multiplication::
7427 * Karatsuba Multiplication::
7428 * Toom 3-Way Multiplication::
7429 * Toom 4-Way Multiplication::
7430 * FFT Multiplication::
7431 * Other Multiplication::
7432 * Unbalanced Multiplication::
7436 @node Basecase Multiplication, Karatsuba Multiplication, Multiplication Algorithms, Multiplication Algorithms
7437 @subsection Basecase Multiplication
7439 Basecase N@cross{}M multiplication is a straightforward rectangular set of
7440 cross-products, the same as long multiplication done by hand and for that
7441 reason sometimes known as the schoolbook or grammar school method. This is an
7442 @m{O(NM),O(N*M)} algorithm. See Knuth section 4.3.1 algorithm M
7443 (@pxref{References}), and the @file{mpn/generic/mul_basecase.c} code.
7445 Assembly implementations of @code{mpn_mul_basecase} are essentially the same
7446 as the generic C code, but have all the usual assembly tricks and
7447 obscurities introduced for speed.
7449 A square can be done in roughly half the time of a multiply, by using the fact
7450 that the cross products above and below the diagonal are the same. A triangle
7451 of products below the diagonal is formed, doubled (left shift by one bit), and
7452 then the products on the diagonal added. This can be seen in
7453 @file{mpn/generic/sqr_basecase.c}. Again the assembly implementations take
7454 essentially the same approach.
7457 \def\GMPline#1#2#3#4#5#6{%
7459 \vrule height 2.5ex depth 1ex
7460 \hbox to 2em {\hfil{#2}\hfil}%
7461 \vrule \hbox to 2em {\hfil{#3}\hfil}%
7462 \vrule \hbox to 2em {\hfil{#4}\hfil}%
7463 \vrule \hbox to 2em {\hfil{#5}\hfil}%
7464 \vrule \hbox to 2em {\hfil{#6}\hfil}%
7469 \hbox to 1.5em {\vrule height 2.5ex depth 1ex width 0pt}%
7470 \hbox {\vrule height 2.5ex depth 1ex width 0pt u0\hfil}%
7471 \hbox {\vrule height 2.5ex depth 1ex width 0pt u1\hfil}%
7472 \hbox {\vrule height 2.5ex depth 1ex width 0pt u2\hfil}%
7473 \hbox {\vrule height 2.5ex depth 1ex width 0pt u3\hfil}%
7474 \hbox {\vrule height 2.5ex depth 1ex width 0pt u4\hfil}%
7478 \hbox to 2em {\hfil u0\hfil}%
7479 \hbox to 2em {\hfil u1\hfil}%
7480 \hbox to 2em {\hfil u2\hfil}%
7481 \hbox to 2em {\hfil u3\hfil}%
7482 \hbox to 2em {\hfil u4\hfil}}%
7485 \GMPline{u0}{d}{}{}{}{}%
7487 \GMPline{u1}{}{d}{}{}{}%
7489 \GMPline{u2}{}{}{d}{}{}%
7491 \GMPline{u3}{}{}{}{d}{}%
7493 \GMPline{u4}{}{}{}{}{d}%
7500 +---+---+---+---+---+
7502 +---+---+---+---+---+
7504 +---+---+---+---+---+
7506 +---+---+---+---+---+
7508 +---+---+---+---+---+
7510 +---+---+---+---+---+
7515 In practice squaring isn't a full 2@cross{} faster than multiplying, it's
7516 usually around 1.5@cross{}. Less than 1.5@cross{} probably indicates
7517 @code{mpn_sqr_basecase} wants improving on that CPU.
7519 On some CPUs @code{mpn_mul_basecase} can be faster than the generic C
7520 @code{mpn_sqr_basecase} on some small sizes. @code{SQR_BASECASE_THRESHOLD} is
7521 the size at which to use @code{mpn_sqr_basecase}, this will be zero if that
7522 routine should be used always.
7525 @node Karatsuba Multiplication, Toom 3-Way Multiplication, Basecase Multiplication, Multiplication Algorithms
7526 @subsection Karatsuba Multiplication
7527 @cindex Karatsuba multiplication
7529 The Karatsuba multiplication algorithm is described in Knuth section 4.3.3
7530 part A, and various other textbooks. A brief description is given here.
7532 The inputs @math{x} and @math{y} are treated as each split into two parts of
7533 equal length (or the most significant part one limb shorter if N is odd).
7536 % GMPboxwidth used for all the multiplication pictures
7537 \global\newdimen\GMPboxwidth \global\GMPboxwidth=5em
7538 % GMPboxdepth and GMPboxheight are also used for the float pictures
7539 \global\newdimen\GMPboxdepth \global\GMPboxdepth=1ex
7540 \global\newdimen\GMPboxheight \global\GMPboxheight=2ex
7541 \gdef\GMPvrule{\vrule height \GMPboxheight depth \GMPboxdepth}
7545 \hbox to 2\GMPboxwidth{%
7546 \GMPvrule \hfil $#1$\hfil \vrule \hfil $#2$\hfil \vrule}%
7550 \hbox to 2\GMPboxwidth {high \hfil low}
7561 +----------+----------+
7563 +----------+----------+
7565 +----------+----------+
7567 +----------+----------+
7572 Let @math{b} be the power of 2 where the split occurs, ie.@: if @ms{x,0} is
7573 @math{k} limbs (@ms{y,0} the same) then
7574 @m{b=2\GMPraise{$k*$@code{mp\_bits\_per\_limb}}, b=2^(k*mp_bits_per_limb)}.
7575 With that @m{x=x_1b+x_0,x=x1*b+x0} and @m{y=y_1b+y_0,y=y1*b+y0}, and the
7579 @m{xy = (b^2+b)x_1y_1 - b(x_1-x_0)(y_1-y_0) + (b+1)x_0y_0,
7580 x*y = (b^2+b)*x1*y1 - b*(x1-x0)*(y1-y0) + (b+1)*x0*y0}
7583 This formula means doing only three multiplies of (N/2)@cross{}(N/2) limbs,
7584 whereas a basecase multiply of N@cross{}N limbs is equivalent to four
7585 multiplies of (N/2)@cross{}(N/2). The factors @math{(b^2+b)} etc represent
7586 the positions where the three products must be added.
7594 \hbox to 2\GMPboxwidth {\hfil\hbox{$#1$}\hfil}%
7596 \hbox to 2\GMPboxwidth {\hfil\hbox{$#2$}\hfil}%
7601 \raise \GMPboxdepth \hbox to \GMPboxwidth {\hfil #1\hskip 0.5em}%
7606 \hbox to 2\GMPboxwidth {\hfil\hbox{$#2$}\hfil}%
7611 \hbox to 4\GMPboxwidth {high \hfil low}
7613 \GMPboxA{x_1y_1}{x_0y_0}
7615 \GMPboxB{$+$}{x_1y_1}
7617 \GMPboxB{$+$}{x_0y_0}
7619 \GMPboxB{$-$}{(x_1-x_0)(y_1-y_0)}
7626 +--------+--------+ +--------+--------+
7628 +--------+--------+ +--------+--------+
7636 sub | (x1-x0)*(y1-y0) |
7642 The term @m{(x_1-x_0)(y_1-y_0),(x1-x0)*(y1-y0)} is best calculated as an
7643 absolute value, and the sign used to choose to add or subtract. Notice the
7644 sum @m{\mathop{\rm high}(x_0y_0)+\mathop{\rm low}(x_1y_1),
7645 high(x0*y0)+low(x1*y1)} occurs twice, so it's possible to do @m{5k,5*k} limb
7646 additions, rather than @m{6k,6*k}, but in GMP extra function call overheads
7647 outweigh the saving.
7649 Squaring is similar to multiplying, but with @math{x=y} the formula reduces to
7650 an equivalent with three squares,
7653 @m{x^2 = (b^2+b)x_1^2 - b(x_1-x_0)^2 + (b+1)x_0^2,
7654 x^2 = (b^2+b)*x1^2 - b*(x1-x0)^2 + (b+1)*x0^2}
7657 The final result is accumulated from those three squares the same way as for
7658 the three multiplies above. The middle term @m{(x_1-x_0)^2,(x1-x0)^2} is now
7661 A similar formula for both multiplying and squaring can be constructed with a
7662 middle term @m{(x_1+x_0)(y_1+y_0),(x1+x0)*(y1+y0)}. But those sums can exceed
7663 @math{k} limbs, leading to more carry handling and additions than the form
7666 Karatsuba multiplication is asymptotically an @math{O(N^@W{1.585})} algorithm,
7667 the exponent being @m{\log3/\log2,log(3)/log(2)}, representing 3 multiplies
7668 each @math{1/2} the size of the inputs. This is a big improvement over the
7669 basecase multiply at @math{O(N^2)} and the advantage soon overcomes the extra
7670 additions Karatsuba performs. @code{MUL_TOOM22_THRESHOLD} can be as little
7671 as 10 limbs. The @code{SQR} threshold is usually about twice the @code{MUL}.
7673 The basecase algorithm will take a time of the form @m{M(N) = aN^2 + bN + c,
7674 M(N) = a*N^2 + b*N + c} and the Karatsuba algorithm @m{K(N) = 3M(N/2) + dN +
7675 e, K(N) = 3*M(N/2) + d*N + e}, which expands to @m{K(N) = {3\over4} aN^2 +
7676 {3\over2} bN + 3c + dN + e, K(N) = 3/4*a*N^2 + 3/2*b*N + 3*c + d*N + e}. The
7677 factor @m{3\over4, 3/4} for @math{a} means per-crossproduct speedups in the
7678 basecase code will increase the threshold since they benefit @math{M(N)} more
7679 than @math{K(N)}. And conversely the @m{3\over2, 3/2} for @math{b} means
7680 linear style speedups of @math{b} will increase the threshold since they
7681 benefit @math{K(N)} more than @math{M(N)}. The latter can be seen for
7682 instance when adding an optimized @code{mpn_sqr_diagonal} to
7683 @code{mpn_sqr_basecase}. Of course all speedups reduce total time, and in
7684 that sense the algorithm thresholds are merely of academic interest.
7687 @node Toom 3-Way Multiplication, Toom 4-Way Multiplication, Karatsuba Multiplication, Multiplication Algorithms
7688 @subsection Toom 3-Way Multiplication
7689 @cindex Toom multiplication
7691 The Karatsuba formula is the simplest case of a general approach to splitting
7692 inputs that leads to both Toom and FFT algorithms. A description of
7693 Toom can be found in Knuth section 4.3.3, with an example 3-way
7694 calculation after Theorem A@. The 3-way form used in GMP is described here.
7696 The operands are each considered split into 3 pieces of equal length (or the
7697 most significant part 1 or 2 limbs shorter than the other two).
7703 \hbox to 3\GMPboxwidth {%
7715 \hbox to 3\GMPboxwidth {high \hfil low}
7717 \GMPbox{x_2}{x_1}{x_0}
7719 \GMPbox{y_2}{y_1}{y_0}
7727 +----------+----------+----------+
7729 +----------+----------+----------+
7731 +----------+----------+----------+
7733 +----------+----------+----------+
7739 These parts are treated as the coefficients of two polynomials
7743 @m{X(t) = x_2t^2 + x_1t + x_0,
7744 X(t) = x2*t^2 + x1*t + x0}
7745 @m{Y(t) = y_2t^2 + y_1t + y_0,
7746 Y(t) = y2*t^2 + y1*t + y0}
7750 Let @math{b} equal the power of 2 which is the size of the @ms{x,0}, @ms{x,1},
7751 @ms{y,0} and @ms{y,1} pieces, ie.@: if they're @math{k} limbs each then
7752 @m{b=2\GMPraise{$k*$@code{mp\_bits\_per\_limb}}, b=2^(k*mp_bits_per_limb)}.
7753 With this @math{x=X(b)} and @math{y=Y(b)}.
7755 Let a polynomial @m{W(t)=X(t)Y(t),W(t)=X(t)*Y(t)} and suppose its coefficients
7759 @m{W(t) = w_4t^4 + w_3t^3 + w_2t^2 + w_1t + w_0,
7760 W(t) = w4*t^4 + w3*t^3 + w2*t^2 + w1*t + w0}
7763 The @m{w_i,w[i]} are going to be determined, and when they are they'll give
7764 the final result using @math{w=W(b)}, since
7765 @m{xy=X(b)Y(b),x*y=X(b)*Y(b)=W(b)}. The coefficients will be roughly
7766 @math{b^2} each, and the final @math{W(b)} will be an addition like,
7770 \moveright #1\GMPboxwidth
7775 \hbox to 2\GMPboxwidth {\hfil$#2$\hfil}%
7781 \hbox to 6\GMPboxwidth {high \hfil low}%
7817 The @m{w_i,w[i]} coefficients could be formed by a simple set of cross
7818 products, like @m{w_4=x_2y_2,w4=x2*y2}, @m{w_3=x_2y_1+x_1y_2,w3=x2*y1+x1*y2},
7819 @m{w_2=x_2y_0+x_1y_1+x_0y_2,w2=x2*y0+x1*y1+x0*y2} etc, but this would need all
7820 nine @m{x_iy_j,x[i]*y[j]} for @math{i,j=0,1,2}, and would be equivalent merely
7821 to a basecase multiply. Instead the following approach is used.
7823 @math{X(t)} and @math{Y(t)} are evaluated and multiplied at 5 points, giving
7824 values of @math{W(t)} at those points. In GMP the following points are used,
7827 @multitable {@m{t=\infty,t=inf}M} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
7828 @item Point @tab Value
7829 @item @math{t=0} @tab @m{x_0y_0,x0 * y0}, which gives @ms{w,0} immediately
7830 @item @math{t=1} @tab @m{(x_2+x_1+x_0)(y_2+y_1+y_0),(x2+x1+x0) * (y2+y1+y0)}
7831 @item @math{t=-1} @tab @m{(x_2-x_1+x_0)(y_2-y_1+y_0),(x2-x1+x0) * (y2-y1+y0)}
7832 @item @math{t=2} @tab @m{(4x_2+2x_1+x_0)(4y_2+2y_1+y_0),(4*x2+2*x1+x0) * (4*y2+2*y1+y0)}
7833 @item @m{t=\infty,t=inf} @tab @m{x_2y_2,x2 * y2}, which gives @ms{w,4} immediately
7837 At @math{t=-1} the values can be negative and that's handled using the
7838 absolute values and tracking the sign separately. At @m{t=\infty,t=inf} the
7839 value is actually @m{\lim_{t\to\infty} {X(t)Y(t)\over t^4}, X(t)*Y(t)/t^4 in
7840 the limit as t approaches infinity}, but it's much easier to think of as
7841 simply @m{x_2y_2,x2*y2} giving @ms{w,4} immediately (much like
7842 @m{x_0y_0,x0*y0} at @math{t=0} gives @ms{w,0} immediately).
7844 Each of the points substituted into
7845 @m{W(t)=w_4t^4+\cdots+w_0,W(t)=w4*t^4+@dots{}+w0} gives a linear combination
7846 of the @m{w_i,w[i]} coefficients, and the value of those combinations has just
7852 W(0) & = & & & & & & & & & w_0 \cr
7853 W(1) & = & w_4 & + & w_3 & + & w_2 & + & w_1 & + & w_0 \cr
7854 W(-1) & = & w_4 & - & w_3 & + & w_2 & - & w_1 & + & w_0 \cr
7855 W(2) & = & 16w_4 & + & 8w_3 & + & 4w_2 & + & 2w_1 & + & w_0 \cr
7856 W(\infty) & = & w_4 \cr
7863 W(1) = w4 + w3 + w2 + w1 + w0
7864 W(-1) = w4 - w3 + w2 - w1 + w0
7865 W(2) = 16*w4 + 8*w3 + 4*w2 + 2*w1 + w0
7871 This is a set of five equations in five unknowns, and some elementary linear
7872 algebra quickly isolates each @m{w_i,w[i]}. This involves adding or
7873 subtracting one @math{W(t)} value from another, and a couple of divisions by
7874 powers of 2 and one division by 3, the latter using the special
7875 @code{mpn_divexact_by3} (@pxref{Exact Division}).
7877 The conversion of @math{W(t)} values to the coefficients is interpolation. A
7878 polynomial of degree 4 like @math{W(t)} is uniquely determined by values known
7879 at 5 different points. The points are arbitrary and can be chosen to make the
7880 linear equations come out with a convenient set of steps for quickly isolating
7883 Squaring follows the same procedure as multiplication, but there's only one
7884 @math{X(t)} and it's evaluated at the 5 points, and those values squared to
7885 give values of @math{W(t)}. The interpolation is then identical, and in fact
7886 the same @code{toom3_interpolate} subroutine is used for both squaring and
7889 Toom-3 is asymptotically @math{O(N^@W{1.465})}, the exponent being
7890 @m{\log5/\log3,log(5)/log(3)}, representing 5 recursive multiplies of 1/3 the
7891 original size each. This is an improvement over Karatsuba at
7892 @math{O(N^@W{1.585})}, though Toom does more work in the evaluation and
7893 interpolation and so it only realizes its advantage above a certain size.
7895 Near the crossover between Toom-3 and Karatsuba there's generally a range of
7896 sizes where the difference between the two is small.
7897 @code{MUL_TOOM33_THRESHOLD} is a somewhat arbitrary point in that range and
7898 successive runs of the tune program can give different values due to small
7899 variations in measuring. A graph of time versus size for the two shows the
7900 effect, see @file{tune/README}.
7902 At the fairly small sizes where the Toom-3 thresholds occur it's worth
7903 remembering that the asymptotic behaviour for Karatsuba and Toom-3 can't be
7904 expected to make accurate predictions, due of course to the big influence of
7905 all sorts of overheads, and the fact that only a few recursions of each are
7906 being performed. Even at large sizes there's a good chance machine dependent
7907 effects like cache architecture will mean actual performance deviates from
7908 what might be predicted.
7910 The formula given for the Karatsuba algorithm (@pxref{Karatsuba
7911 Multiplication}) has an equivalent for Toom-3 involving only five multiplies,
7912 but this would be complicated and unenlightening.
7914 An alternate view of Toom-3 can be found in Zuras (@pxref{References}), using
7915 a vector to represent the @math{x} and @math{y} splits and a matrix
7916 multiplication for the evaluation and interpolation stages. The matrix
7917 inverses are not meant to be actually used, and they have elements with values
7918 much greater than in fact arise in the interpolation steps. The diagram shown
7919 for the 3-way is attractive, but again doesn't have to be implemented that way
7920 and for example with a bit of rearrangement just one division by 6 can be
7924 @node Toom 4-Way Multiplication, FFT Multiplication, Toom 3-Way Multiplication, Multiplication Algorithms
7925 @subsection Toom 4-Way Multiplication
7926 @cindex Toom multiplication
7928 Karatsuba and Toom-3 split the operands into 2 and 3 coefficients,
7929 respectively. Toom-4 analogously splits the operands into 4 coefficients.
7930 Using the notation from the section on Toom-3 multiplication, we form two
7935 @m{X(t) = x_3t^3 + x_2t^2 + x_1t + x_0,
7936 X(t) = x3*t^3 + x2*t^2 + x1*t + x0}
7937 @m{Y(t) = y_3t^3 + y_2t^2 + y_1t + y_0,
7938 Y(t) = y3*t^3 + y2*t^2 + y1*t + y0}
7942 @math{X(t)} and @math{Y(t)} are evaluated and multiplied at 7 points, giving
7943 values of @math{W(t)} at those points. In GMP the following points are used,
7946 @multitable {@m{t=-1/2,t=inf}M} {MMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMMM}
7947 @item Point @tab Value
7948 @item @math{t=0} @tab @m{x_0y_0,x0 * y0}, which gives @ms{w,0} immediately
7949 @item @math{t=1/2} @tab @m{(x_3+2x_2+4x_1+8x_0)(y_3+2y_2+4y_1+8y_0),(x3+2*x2+4*x1+8*x0) * (y3+2*y2+4*y1+8*y0)}
7950 @item @math{t=-1/2} @tab @m{(-x_3+2x_2-4x_1+8x_0)(-y_3+2y_2-4y_1+8y_0),(-x3+2*x2-4*x1+8*x0) * (-y3+2*y2-4*y1+8*y0)}
7951 @item @math{t=1} @tab @m{(x_3+x_2+x_1+x_0)(y_3+y_2+y_1+y_0),(x3+x2+x1+x0) * (y3+y2+y1+y0)}
7952 @item @math{t=-1} @tab @m{(-x_3+x_2-x_1+x_0)(-y_3+y_2-y_1+y_0),(-x3+x2-x1+x0) * (-y3+y2-y1+y0)}
7953 @item @math{t=2} @tab @m{(8x_3+4x_2+2x_1+x_0)(8y_3+4y_2+2y_1+y_0),(8*x3+4*x2+2*x1+x0) * (8*y3+4*y2+2*y1+y0)}
7954 @item @m{t=\infty,t=inf} @tab @m{x_3y_3,x3 * y3}, which gives @ms{w,6} immediately
7958 The number of additions and subtractions for Toom-4 is much larger than for Toom-3.
7959 But several subexpressions occur multiple times, for example @m{x_2+x_0,x2+x0}, occurs
7960 for both @math{t=1} and @math{t=-1}.
7962 Toom-4 is asymptotically @math{O(N^@W{1.404})}, the exponent being
7963 @m{\log7/\log4,log(7)/log(4)}, representing 7 recursive multiplies of 1/4 the
7967 @node FFT Multiplication, Other Multiplication, Toom 4-Way Multiplication, Multiplication Algorithms
7968 @subsection FFT Multiplication
7969 @cindex FFT multiplication
7970 @cindex Fast Fourier Transform
7972 At large to very large sizes a Fermat style FFT multiplication is used,
7973 following Sch@"onhage and Strassen (@pxref{References}). Descriptions of FFTs
7974 in various forms can be found in many textbooks, for instance Knuth section
7975 4.3.3 part C or Lipson chapter IX@. A brief description of the form used in
7978 The multiplication done is @m{xy \bmod 2^N+1, x*y mod 2^N+1}, for a given
7979 @math{N}. A full product @m{xy,x*y} is obtained by choosing @m{N \ge
7980 \mathop{\rm bits}(x)+\mathop{\rm bits}(y), N>=bits(x)+bits(y)} and padding
7981 @math{x} and @math{y} with high zero limbs. The modular product is the native
7982 form for the algorithm, so padding to get a full product is unavoidable.
7984 The algorithm follows a split, evaluate, pointwise multiply, interpolate and
7985 combine similar to that described above for Karatsuba and Toom-3. A @math{k}
7986 parameter controls the split, with an FFT-@math{k} splitting into @math{2^k}
7987 pieces of @math{M=N/2^k} bits each. @math{N} must be a multiple of
7988 @m{2^k\times@code{mp\_bits\_per\_limb}, (2^k)*@nicode{mp_bits_per_limb}} so
7989 the split falls on limb boundaries, avoiding bit shifts in the split and
7992 The evaluations, pointwise multiplications, and interpolation, are all done
7993 modulo @m{2^{N'}+1, 2^N'+1} where @math{N'} is @math{2M+k+3} rounded up to a
7994 multiple of @math{2^k} and of @code{mp_bits_per_limb}. The results of
7995 interpolation will be the following negacyclic convolution of the input
7996 pieces, and the choice of @math{N'} ensures these sums aren't truncated.
7998 $$ w_n = \sum_{{i+j = b2^k+n}\atop{b=0,1}} (-1)^b x_i y_j $$
8005 w[n] = / (-1) * x[i] * y[j]
8012 The points used for the evaluation are @math{g^i} for @math{i=0} to
8013 @math{2^k-1} where @m{g=2^{2N'/2^k}, g=2^(2N'/2^k)}. @math{g} is a
8014 @m{2^k,2^k'}th root of unity mod @m{2^{N'}+1,2^N'+1}, which produces necessary
8015 cancellations at the interpolation stage, and it's also a power of 2 so the
8016 fast Fourier transforms used for the evaluation and interpolation do only
8017 shifts, adds and negations.
8019 The pointwise multiplications are done modulo @m{2^{N'}+1, 2^N'+1} and either
8020 recurse into a further FFT or use a plain multiplication (Toom-3, Karatsuba or
8021 basecase), whichever is optimal at the size @math{N'}. The interpolation is
8022 an inverse fast Fourier transform. The resulting set of sums of @m{x_iy_j,
8023 x[i]*y[j]} are added at appropriate offsets to give the final result.
8025 Squaring is the same, but @math{x} is the only input so it's one transform at
8026 the evaluate stage and the pointwise multiplies are squares. The
8027 interpolation is the same.
8029 For a mod @math{2^N+1} product, an FFT-@math{k} is an @m{O(N^{k/(k-1)}),
8030 O(N^(k/(k-1)))} algorithm, the exponent representing @math{2^k} recursed
8031 modular multiplies each @m{1/2^{k-1},1/2^(k-1)} the size of the original.
8032 Each successive @math{k} is an asymptotic improvement, but overheads mean each
8033 is only faster at bigger and bigger sizes. In the code, @code{MUL_FFT_TABLE}
8034 and @code{SQR_FFT_TABLE} are the thresholds where each @math{k} is used. Each
8035 new @math{k} effectively swaps some multiplying for some shifts, adds and
8038 A mod @math{2^N+1} product can be formed with a normal
8039 @math{N@cross{}N@rightarrow{}2N} bit multiply plus a subtraction, so an FFT
8040 and Toom-3 etc can be compared directly. A @math{k=4} FFT at
8041 @math{O(N^@W{1.333})} can be expected to be the first faster than Toom-3 at
8042 @math{O(N^@W{1.465})}. In practice this is what's found, with
8043 @code{MUL_FFT_MODF_THRESHOLD} and @code{SQR_FFT_MODF_THRESHOLD} being between
8044 300 and 1000 limbs, depending on the CPU@. So far it's been found that only
8045 very large FFTs recurse into pointwise multiplies above these sizes.
8047 When an FFT is to give a full product, the change of @math{N} to @math{2N}
8048 doesn't alter the theoretical complexity for a given @math{k}, but for the
8049 purposes of considering where an FFT might be first used it can be assumed
8050 that the FFT is recursing into a normal multiply and that on that basis it's
8051 doing @math{2^k} recursed multiplies each @m{1/2^{k-2},1/2^(k-2)} the size of
8052 the inputs, making it @m{O(N^{k/(k-2)}), O(N^(k/(k-2)))}. This would mean
8053 @math{k=7} at @math{O(N^@W{1.4})} would be the first FFT faster than Toom-3.
8054 In practice @code{MUL_FFT_THRESHOLD} and @code{SQR_FFT_THRESHOLD} have been
8055 found to be in the @math{k=8} range, somewhere between 3000 and 10000 limbs.
8057 The way @math{N} is split into @math{2^k} pieces and then @math{2M+k+3} is
8058 rounded up to a multiple of @math{2^k} and @code{mp_bits_per_limb} means that
8059 when @math{2^k@ge{}@nicode{mp\_bits\_per\_limb}} the effective @math{N} is a
8060 multiple of @m{2^{2k-1},2^(2k-1)} bits. The @math{+k+3} means some values of
8061 @math{N} just under such a multiple will be rounded to the next. The
8062 complexity calculations above assume that a favourable size is used, meaning
8063 one which isn't padded through rounding, and it's also assumed that the extra
8064 @math{+k+3} bits are negligible at typical FFT sizes.
8066 The practical effect of the @m{2^{2k-1},2^(2k-1)} constraint is to introduce a
8067 step-effect into measured speeds. For example @math{k=8} will round @math{N}
8068 up to a multiple of 32768 bits, so for a 32-bit limb there'll be 512 limb
8069 groups of sizes for which @code{mpn_mul_n} runs at the same speed. Or for
8070 @math{k=9} groups of 2048 limbs, @math{k=10} groups of 8192 limbs, etc. In
8071 practice it's been found each @math{k} is used at quite small multiples of its
8072 size constraint and so the step effect is quite noticeable in a time versus
8075 The threshold determinations currently measure at the mid-points of size
8076 steps, but this is sub-optimal since at the start of a new step it can happen
8077 that it's better to go back to the previous @math{k} for a while. Something
8078 more sophisticated for @code{MUL_FFT_TABLE} and @code{SQR_FFT_TABLE} will be
8082 @node Other Multiplication, Unbalanced Multiplication, FFT Multiplication, Multiplication Algorithms
8083 @subsection Other Multiplication
8084 @cindex Toom multiplication
8086 The Toom algorithms described above (@pxref{Toom 3-Way Multiplication},
8087 @pxref{Toom 4-Way Multiplication}) generalizes to split into an arbitrary
8088 number of pieces, as per Knuth section 4.3.3 algorithm C@. This is not
8089 currently used. The notes here are merely for interest.
8091 In general a split into @math{r+1} pieces is made, and evaluations and
8092 pointwise multiplications done at @m{2r+1,2*r+1} points. A 4-way split does 7
8093 pointwise multiplies, 5-way does 9, etc. Asymptotically an @math{(r+1)}-way
8094 algorithm is @m{O(N^{log(2r+1)/log(r+1)}, O(N^(log(2*r+1)/log(r+1)))}. Only
8095 the pointwise multiplications count towards big-@math{O} complexity, but the
8096 time spent in the evaluate and interpolate stages grows with @math{r} and has
8097 a significant practical impact, with the asymptotic advantage of each @math{r}
8098 realized only at bigger and bigger sizes. The overheads grow as
8099 @m{O(Nr),O(N*r)}, whereas in an @math{r=2^k} FFT they grow only as @m{O(N \log
8102 Knuth algorithm C evaluates at points 0,1,2,@dots{},@m{2r,2*r}, but exercise 4
8103 uses @math{-r},@dots{},0,@dots{},@math{r} and the latter saves some small
8104 multiplies in the evaluate stage (or rather trades them for additions), and
8105 has a further saving of nearly half the interpolate steps. The idea is to
8106 separate odd and even final coefficients and then perform algorithm C steps C7
8107 and C8 on them separately. The divisors at step C7 become @math{j^2} and the
8108 multipliers at C8 become @m{2tj-j^2,2*t*j-j^2}.
8110 Splitting odd and even parts through positive and negative points can be
8111 thought of as using @math{-1} as a square root of unity. If a 4th root of
8112 unity was available then a further split and speedup would be possible, but no
8113 such root exists for plain integers. Going to complex integers with
8114 @m{i=\sqrt{-1}, i=sqrt(-1)} doesn't help, essentially because in Cartesian
8115 form it takes three real multiplies to do a complex multiply. The existence
8116 of @m{2^k,2^k'}th roots of unity in a suitable ring or field lets the fast
8117 Fourier transform keep splitting and get to @m{O(N \log r), O(N*log(r))}.
8119 Floating point FFTs use complex numbers approximating Nth roots of unity.
8120 Some processors have special support for such FFTs. But these are not used in
8121 GMP since it's very difficult to guarantee an exact result (to some number of
8122 bits). An occasional difference of 1 in the last bit might not matter to a
8123 typical signal processing algorithm, but is of course of vital importance to
8127 @node Unbalanced Multiplication, , Other Multiplication, Multiplication Algorithms
8128 @subsection Unbalanced Multiplication
8129 @cindex Unbalanced multiplication
8131 Multiplication of operands with different sizes, both below
8132 @code{MUL_TOOM22_THRESHOLD} are done with plain schoolbook multiplication
8133 (@pxref{Basecase Multiplication}).
8135 For really large operands, we invoke FFT directly.
8137 For operands between these sizes, we use Toom inspired algorithms suggested by
8138 Alberto Zanoni and Marco Bodrato. The idea is to split the operands into
8139 polynomials of different degree. GMP currently splits the smaller operand
8140 onto 2 coefficients, i.e., a polynomial of degree 1, but the larger operand
8141 can be split into 2, 3, or 4 coefficients, i.e., a polynomial of degree 1 to
8144 @c FIXME: This is mighty ugly, but a cleaner @need triggers texinfo bugs that
8145 @c screws up layout here and there in the rest of the manual.
8149 @node Division Algorithms, Greatest Common Divisor Algorithms, Multiplication Algorithms, Algorithms
8150 @section Division Algorithms
8151 @cindex Division algorithms
8154 * Single Limb Division::
8155 * Basecase Division::
8156 * Divide and Conquer Division::
8157 * Block-Wise Barrett Division::
8160 * Small Quotient Division::
8164 @node Single Limb Division, Basecase Division, Division Algorithms, Division Algorithms
8165 @subsection Single Limb Division
8167 N@cross{}1 division is implemented using repeated 2@cross{}1 divisions from
8168 high to low, either with a hardware divide instruction or a multiplication by
8169 inverse, whichever is best on a given CPU.
8171 The multiply by inverse follows ``Improved division by invariant integers'' by
8172 M@"oller and Granlund (@pxref{References}) and is implemented as
8173 @code{udiv_qrnnd_preinv} in @file{gmp-impl.h}. The idea is to have a
8174 fixed-point approximation to @math{1/d} (see @code{invert_limb}) and then
8175 multiply by the high limb (plus one bit) of the dividend to get a quotient
8176 @math{q}. With @math{d} normalized (high bit set), @math{q} is no more than 1
8177 too small. Subtracting @m{qd,q*d} from the dividend gives a remainder, and
8178 reveals whether @math{q} or @math{q-1} is correct.
8180 The result is a division done with two multiplications and four or five
8181 arithmetic operations. On CPUs with low latency multipliers this can be much
8182 faster than a hardware divide, though the cost of calculating the inverse at
8183 the start may mean it's only better on inputs bigger than say 4 or 5 limbs.
8185 When a divisor must be normalized, either for the generic C
8186 @code{__udiv_qrnnd_c} or the multiply by inverse, the division performed is
8187 actually @m{a2^k,a*2^k} by @m{d2^k,d*2^k} where @math{a} is the dividend and
8188 @math{k} is the power necessary to have the high bit of @m{d2^k,d*2^k} set.
8189 The bit shifts for the dividend are usually accomplished ``on the fly''
8190 meaning by extracting the appropriate bits at each step. Done this way the
8191 quotient limbs come out aligned ready to store. When only the remainder is
8192 wanted, an alternative is to take the dividend limbs unshifted and calculate
8193 @m{r = a \bmod d2^k, r = a mod d*2^k} followed by an extra final step @m{r2^k
8194 \bmod d2^k, r*2^k mod d*2^k}. This can help on CPUs with poor bit shifts or
8197 The multiply by inverse can be done two limbs at a time. The calculation is
8198 basically the same, but the inverse is two limbs and the divisor treated as if
8199 padded with a low zero limb. This means more work, since the inverse will
8200 need a 2@cross{}2 multiply, but the four 1@cross{}1s to do that are
8201 independent and can therefore be done partly or wholly in parallel. Likewise
8202 for a 2@cross{}1 calculating @m{qd,q*d}. The net effect is to process two
8203 limbs with roughly the same two multiplies worth of latency that one limb at a
8204 time gives. This extends to 3 or 4 limbs at a time, though the extra work to
8205 apply the inverse will almost certainly soon reach the limits of multiplier
8208 A similar approach in reverse can be taken to process just half a limb at a
8209 time if the divisor is only a half limb. In this case the 1@cross{}1 multiply
8210 for the inverse effectively becomes two @m{{1\over2}\times1, (1/2)x1} for each
8211 limb, which can be a saving on CPUs with a fast half limb multiply, or in fact
8212 if the only multiply is a half limb, and especially if it's not pipelined.
8215 @node Basecase Division, Divide and Conquer Division, Single Limb Division, Division Algorithms
8216 @subsection Basecase Division
8218 Basecase N@cross{}M division is like long division done by hand, but in base
8219 @m{2\GMPraise{@code{mp\_bits\_per\_limb}}, 2^mp_bits_per_limb}. See Knuth
8220 section 4.3.1 algorithm D, and @file{mpn/generic/sb_divrem_mn.c}.
8222 Briefly stated, while the dividend remains larger than the divisor, a high
8223 quotient limb is formed and the N@cross{}1 product @m{qd,q*d} subtracted at
8224 the top end of the dividend. With a normalized divisor (most significant bit
8225 set), each quotient limb can be formed with a 2@cross{}1 division and a
8226 1@cross{}1 multiplication plus some subtractions. The 2@cross{}1 division is
8227 by the high limb of the divisor and is done either with a hardware divide or a
8228 multiply by inverse (the same as in @ref{Single Limb Division}) whichever is
8229 faster. Such a quotient is sometimes one too big, requiring an addback of the
8230 divisor, but that happens rarely.
8232 With Q=N@minus{}M being the number of quotient limbs, this is an
8233 @m{O(QM),O(Q*M)} algorithm and will run at a speed similar to a basecase
8234 Q@cross{}M multiplication, differing in fact only in the extra multiply and
8235 divide for each of the Q quotient limbs.
8238 @node Divide and Conquer Division, Block-Wise Barrett Division, Basecase Division, Division Algorithms
8239 @subsection Divide and Conquer Division
8241 For divisors larger than @code{DC_DIV_QR_THRESHOLD}, division is done by dividing.
8242 Or to be precise by a recursive divide and conquer algorithm based on work by
8243 Moenck and Borodin, Jebelean, and Burnikel and Ziegler (@pxref{References}).
8245 The algorithm consists essentially of recognising that a 2N@cross{}N division
8246 can be done with the basecase division algorithm (@pxref{Basecase Division}),
8247 but using N/2 limbs as a base, not just a single limb. This way the
8248 multiplications that arise are (N/2)@cross{}(N/2) and can take advantage of
8249 Karatsuba and higher multiplication algorithms (@pxref{Multiplication
8250 Algorithms}). The two ``digits'' of the quotient are formed by recursive
8251 N@cross{}(N/2) divisions.
8253 If the (N/2)@cross{}(N/2) multiplies are done with a basecase multiplication
8254 then the work is about the same as a basecase division, but with more function
8255 call overheads and with some subtractions separated from the multiplies.
8256 These overheads mean that it's only when N/2 is above
8257 @code{MUL_TOOM22_THRESHOLD} that divide and conquer is of use.
8259 @code{DC_DIV_QR_THRESHOLD} is based on the divisor size N, so it will be somewhere
8260 above twice @code{MUL_TOOM22_THRESHOLD}, but how much above depends on the
8261 CPU@. An optimized @code{mpn_mul_basecase} can lower @code{DC_DIV_QR_THRESHOLD} a
8262 little by offering a ready-made advantage over repeated @code{mpn_submul_1}
8265 Divide and conquer is asymptotically @m{O(M(N)\log N),O(M(N)*log(N))} where
8266 @math{M(N)} is the time for an N@cross{}N multiplication done with FFTs. The
8267 actual time is a sum over multiplications of the recursed sizes, as can be
8268 seen near the end of section 2.2 of Burnikel and Ziegler. For example, within
8269 the Toom-3 range, divide and conquer is @m{2.63M(N), 2.63*M(N)}. With higher
8270 algorithms the @math{M(N)} term improves and the multiplier tends to @m{\log
8271 N, log(N)}. In practice, at moderate to large sizes, a 2N@cross{}N division
8272 is about 2 to 4 times slower than an N@cross{}N multiplication.
8275 @node Block-Wise Barrett Division, Exact Division, Divide and Conquer Division, Division Algorithms
8276 @subsection Block-Wise Barrett Division
8278 For the largest divisions, a block-wise Barrett division algorithm is used.
8279 Here, the divisor is inverted to a precision determined by the relative size of
8280 the dividend and divisor. Blocks of quotient limbs are then generated by
8281 multiplying blocks from the dividend by the inverse.
8283 Our block-wise algorithm computes a smaller inverse than in the plain Barrett
8284 algorithm. For a @math{2n/n} division, the inverse will be just @m{\lceil n/2
8285 \rceil, ceil(n/2)} limbs.
8288 @node Exact Division, Exact Remainder, Block-Wise Barrett Division, Division Algorithms
8289 @subsection Exact Division
8292 A so-called exact division is when the dividend is known to be an exact
8293 multiple of the divisor. Jebelean's exact division algorithm uses this
8294 knowledge to make some significant optimizations (@pxref{References}).
8296 The idea can be illustrated in decimal for example with 368154 divided by
8297 543. Because the low digit of the dividend is 4, the low digit of the
8298 quotient must be 8. This is arrived at from @m{4 \mathord{\times} 7 \bmod 10,
8299 4*7 mod 10}, using the fact 7 is the modular inverse of 3 (the low digit of
8300 the divisor), since @m{3 \mathord{\times} 7 \mathop{\equiv} 1 \bmod 10, 3*7
8301 @equiv{} 1 mod 10}. So @m{8\mathord{\times}543 = 4344,8*543=4344} can be
8302 subtracted from the dividend leaving 363810. Notice the low digit has become
8305 The procedure is repeated at the second digit, with the next quotient digit 7
8306 (@m{1 \mathord{\times} 7 \bmod 10, 7 @equiv{} 1*7 mod 10}), subtracting
8307 @m{7\mathord{\times}543 = 3801,7*543=3801}, leaving 325800. And finally at
8308 the third digit with quotient digit 6 (@m{8 \mathord{\times} 7 \bmod 10, 8*7
8309 mod 10}), subtracting @m{6\mathord{\times}543 = 3258,6*543=3258} leaving 0.
8310 So the quotient is 678.
8312 Notice however that the multiplies and subtractions don't need to extend past
8313 the low three digits of the dividend, since that's enough to determine the
8314 three quotient digits. For the last quotient digit no subtraction is needed
8315 at all. On a 2N@cross{}N division like this one, only about half the work of
8316 a normal basecase division is necessary.
8318 For an N@cross{}M exact division producing Q=N@minus{}M quotient limbs, the
8319 saving over a normal basecase division is in two parts. Firstly, each of the
8320 Q quotient limbs needs only one multiply, not a 2@cross{}1 divide and
8321 multiply. Secondly, the crossproducts are reduced when @math{Q>M} to
8322 @m{QM-M(M+1)/2,Q*M-M*(M+1)/2}, or when @math{Q@le{}M} to @m{Q(Q-1)/2,
8323 Q*(Q-1)/2}. Notice the savings are complementary. If Q is big then many
8324 divisions are saved, or if Q is small then the crossproducts reduce to a small
8327 The modular inverse used is calculated efficiently by @code{binvert_limb} in
8328 @file{gmp-impl.h}. This does four multiplies for a 32-bit limb, or six for a
8329 64-bit limb. @file{tune/modlinv.c} has some alternate implementations that
8330 might suit processors better at bit twiddling than multiplying.
8332 The sub-quadratic exact division described by Jebelean in ``Exact Division
8333 with Karatsuba Complexity'' is not currently implemented. It uses a
8334 rearrangement similar to the divide and conquer for normal division
8335 (@pxref{Divide and Conquer Division}), but operating from low to high. A
8336 further possibility not currently implemented is ``Bidirectional Exact Integer
8337 Division'' by Krandick and Jebelean which forms quotient limbs from both the
8338 high and low ends of the dividend, and can halve once more the number of
8339 crossproducts needed in a 2N@cross{}N division.
8341 A special case exact division by 3 exists in @code{mpn_divexact_by3},
8342 supporting Toom-3 multiplication and @code{mpq} canonicalizations. It forms
8343 quotient digits with a multiply by the modular inverse of 3 (which is
8344 @code{0xAA..AAB}) and uses two comparisons to determine a borrow for the next
8345 limb. The multiplications don't need to be on the dependent chain, as long as
8346 the effect of the borrows is applied, which can help chips with pipelined
8350 @node Exact Remainder, Small Quotient Division, Exact Division, Division Algorithms
8351 @subsection Exact Remainder
8352 @cindex Exact remainder
8354 If the exact division algorithm is done with a full subtraction at each stage
8355 and the dividend isn't a multiple of the divisor, then low zero limbs are
8356 produced but with a remainder in the high limbs. For dividend @math{a},
8357 divisor @math{d}, quotient @math{q}, and @m{b = 2
8358 \GMPraise{@code{mp\_bits\_per\_limb}}, b = 2^mp_bits_per_limb}, this remainder
8359 @math{r} is of the form
8361 $$ a = qd + r b^n $$
8370 @math{n} represents the number of zero limbs produced by the subtractions,
8371 that being the number of limbs produced for @math{q}. @math{r} will be in the
8372 range @math{0@le{}r<d} and can be viewed as a remainder, but one shifted up by
8373 a factor of @math{b^n}.
8375 Carrying out full subtractions at each stage means the same number of cross
8376 products must be done as a normal division, but there's still some single limb
8377 divisions saved. When @math{d} is a single limb some simplifications arise,
8378 providing good speedups on a number of processors.
8380 @code{mpn_divexact_by3}, @code{mpn_modexact_1_odd} and the @code{mpn_redc_X}
8381 functions differ subtly in how they return @math{r}, leading to some negations
8382 in the above formula, but all are essentially the same.
8384 @cindex Divisibility algorithm
8385 @cindex Congruence algorithm
8386 Clearly @math{r} is zero when @math{a} is a multiple of @math{d}, and this
8387 leads to divisibility or congruence tests which are potentially more efficient
8388 than a normal division.
8390 The factor of @math{b^n} on @math{r} can be ignored in a GCD when @math{d} is
8391 odd, hence the use of @code{mpn_modexact_1_odd} by @code{mpn_gcd_1} and
8392 @code{mpz_kronecker_ui} etc (@pxref{Greatest Common Divisor Algorithms}).
8394 Montgomery's REDC method for modular multiplications uses operands of the form
8395 of @m{xb^{-n}, x*b^-n} and @m{yb^{-n}, y*b^-n} and on calculating @m{(xb^{-n})
8396 (yb^{-n}), (x*b^-n)*(y*b^-n)} uses the factor of @math{b^n} in the exact
8397 remainder to reach a product in the same form @m{(xy)b^{-n}, (x*y)*b^-n}
8398 (@pxref{Modular Powering Algorithm}).
8400 Notice that @math{r} generally gives no useful information about the ordinary
8401 remainder @math{a @bmod d} since @math{b^n @bmod d} could be anything. If
8402 however @math{b^n @equiv{} 1 @bmod d}, then @math{r} is the negative of the
8403 ordinary remainder. This occurs whenever @math{d} is a factor of
8404 @math{b^n-1}, as for example with 3 in @code{mpn_divexact_by3}. For a 32 or
8405 64 bit limb other such factors include 5, 17 and 257, but no particular use
8406 has been found for this.
8409 @node Small Quotient Division, , Exact Remainder, Division Algorithms
8410 @subsection Small Quotient Division
8412 An N@cross{}M division where the number of quotient limbs Q=N@minus{}M is
8413 small can be optimized somewhat.
8415 An ordinary basecase division normalizes the divisor by shifting it to make
8416 the high bit set, shifting the dividend accordingly, and shifting the
8417 remainder back down at the end of the calculation. This is wasteful if only a
8418 few quotient limbs are to be formed. Instead a division of just the top
8419 @m{\rm2Q,2*Q} limbs of the dividend by the top Q limbs of the divisor can be
8420 used to form a trial quotient. This requires only those limbs normalized, not
8421 the whole of the divisor and dividend.
8423 A multiply and subtract then applies the trial quotient to the M@minus{}Q
8424 unused limbs of the divisor and N@minus{}Q dividend limbs (which includes Q
8425 limbs remaining from the trial quotient division). The starting trial
8426 quotient can be 1 or 2 too big, but all cases of 2 too big and most cases of 1
8427 too big are detected by first comparing the most significant limbs that will
8428 arise from the subtraction. An addback is done if the quotient still turns
8429 out to be 1 too big.
8431 This whole procedure is essentially the same as one step of the basecase
8432 algorithm done in a Q limb base, though with the trial quotient test done only
8433 with the high limbs, not an entire Q limb ``digit'' product. The correctness
8434 of this weaker test can be established by following the argument of Knuth
8435 section 4.3.1 exercise 20 but with the @m{v_2 \GMPhat q > b \GMPhat r
8436 + u_2, v2*q>b*r+u2} condition appropriately relaxed.
8440 @node Greatest Common Divisor Algorithms, Powering Algorithms, Division Algorithms, Algorithms
8441 @section Greatest Common Divisor
8442 @cindex Greatest common divisor algorithms
8443 @cindex GCD algorithms
8447 * Lehmer's Algorithm::
8448 * Subquadratic GCD::
8454 @node Binary GCD, Lehmer's Algorithm, Greatest Common Divisor Algorithms, Greatest Common Divisor Algorithms
8455 @subsection Binary GCD
8457 At small sizes GMP uses an @math{O(N^2)} binary style GCD@. This is described
8458 in many textbooks, for example Knuth section 4.5.2 algorithm B@. It simply
8459 consists of successively reducing odd operands @math{a} and @math{b} using
8462 @math{a,b = @abs{}(a-b),@min{}(a,b)} @*
8463 strip factors of 2 from @math{a}
8466 The Euclidean GCD algorithm, as per Knuth algorithms E and A, repeatedly
8467 computes the quotient @m{q = \lfloor a/b \rfloor, q = floor(a/b)} and replaces
8468 @math{a,b} by @math{v, u - q v}. The binary algorithm has so far been found to
8469 be faster than the Euclidean algorithm everywhere. One reason the binary
8470 method does well is that the implied quotient at each step is usually small,
8471 so often only one or two subtractions are needed to get the same effect as a
8472 division. Quotients 1, 2 and 3 for example occur 67.7% of the time, see Knuth
8473 section 4.5.3 Theorem E.
8475 When the implied quotient is large, meaning @math{b} is much smaller than
8476 @math{a}, then a division is worthwhile. This is the basis for the initial
8477 @math{a @bmod b} reductions in @code{mpn_gcd} and @code{mpn_gcd_1} (the latter
8478 for both N@cross{}1 and 1@cross{}1 cases). But after that initial reduction,
8479 big quotients occur too rarely to make it worth checking for them.
8482 The final @math{1@cross{}1} GCD in @code{mpn_gcd_1} is done in the generic C
8483 code as described above. For two N-bit operands, the algorithm takes about
8484 0.68 iterations per bit. For optimum performance some attention needs to be
8485 paid to the way the factors of 2 are stripped from @math{a}.
8487 Firstly it may be noted that in twos complement the number of low zero bits on
8488 @math{a-b} is the same as @math{b-a}, so counting or testing can begin on
8489 @math{a-b} without waiting for @math{@abs{}(a-b)} to be determined.
8491 A loop stripping low zero bits tends not to branch predict well, since the
8492 condition is data dependent. But on average there's only a few low zeros, so
8493 an option is to strip one or two bits arithmetically then loop for more (as
8494 done for AMD K6). Or use a lookup table to get a count for several bits then
8495 loop for more (as done for AMD K7). An alternative approach is to keep just
8496 one of @math{a} or @math{b} odd and iterate
8499 @math{a,b = @abs{}(a-b), @min{}(a,b)} @*
8500 @math{a = a/2} if even @*
8501 @math{b = b/2} if even
8504 This requires about 1.25 iterations per bit, but stripping of a single bit at
8505 each step avoids any branching. Repeating the bit strip reduces to about 0.9
8506 iterations per bit, which may be a worthwhile tradeoff.
8508 Generally with the above approaches a speed of perhaps 6 cycles per bit can be
8509 achieved, which is still not terribly fast with for instance a 64-bit GCD
8510 taking nearly 400 cycles. It's this sort of time which means it's not usually
8511 advantageous to combine a set of divisibility tests into a GCD.
8513 Currently, the binary algorithm is used for GCD only when @math{N < 3}.
8515 @node Lehmer's Algorithm, Subquadratic GCD, Binary GCD, Greatest Common Divisor Algorithms
8516 @comment node-name, next, previous, up
8517 @subsection Lehmer's algorithm
8519 Lehmer's improvement of the Euclidean algorithms is based on the observation
8520 that the initial part of the quotient sequence depends only on the most
8521 significant parts of the inputs. The variant of Lehmer's algorithm used in GMP
8522 splits off the most significant two limbs, as suggested, e.g., in ``A
8523 Double-Digit Lehmer-Euclid Algorithm'' by Jebelean (@pxref{References}). The
8524 quotients of two double-limb inputs are collected as a 2 by 2 matrix with
8525 single-limb elements. This is done by the function @code{mpn_hgcd2}. The
8526 resulting matrix is applied to the inputs using @code{mpn_mul_1} and
8527 @code{mpn_submul_1}. Each iteration usually reduces the inputs by almost one
8528 limb. In the rare case of a large quotient, no progress can be made by
8529 examining just the most significant two limbs, and the quotient is computing
8530 using plain division.
8532 The resulting algorithm is asymptotically @math{O(N^2)}, just as the Euclidean
8533 algorithm and the binary algorithm. The quadratic part of the work are
8534 the calls to @code{mpn_mul_1} and @code{mpn_submul_1}. For small sizes, the
8535 linear work is also significant. There are roughly @math{N} calls to the
8536 @code{mpn_hgcd2} function. This function uses a couple of important
8541 It uses the same relaxed notion of correctness as @code{mpn_hgcd} (see next
8542 section). This means that when called with the most significant two limbs of
8543 two large numbers, the returned matrix does not always correspond exactly to
8544 the initial quotient sequence for the two large numbers; the final quotient
8545 may sometimes be one off.
8548 It takes advantage of the fact the quotients are usually small. The division
8549 operator is not used, since the corresponding assembler instruction is very
8550 slow on most architectures. (This code could probably be improved further, it
8551 uses many branches that are unfriendly to prediction).
8554 It switches from double-limb calculations to single-limb calculations half-way
8555 through, when the input numbers have been reduced in size from two limbs to
8560 @node Subquadratic GCD, Extended GCD, Lehmer's Algorithm, Greatest Common Divisor Algorithms
8561 @subsection Subquadratic GCD
8563 For inputs larger than @code{GCD_DC_THRESHOLD}, GCD is computed via the HGCD
8564 (Half GCD) function, as a generalization to Lehmer's algorithm.
8566 Let the inputs @math{a,b} be of size @math{N} limbs each. Put @m{S=\lfloor N/2
8567 \rfloor + 1, S = floor(N/2) + 1}. Then HGCD(a,b) returns a transformation
8568 matrix @math{T} with non-negative elements, and reduced numbers @math{(c;d) =
8569 T^{-1} (a;b)}. The reduced numbers @math{c,d} must be larger than @math{S}
8570 limbs, while their difference @math{abs(c-d)} must fit in @math{S} limbs. The
8571 matrix elements will also be of size roughly @math{N/2}.
8573 The HGCD base case uses Lehmer's algorithm, but with the above stop condition
8574 that returns reduced numbers and the corresponding transformation matrix
8575 half-way through. For inputs larger than @code{HGCD_THRESHOLD}, HGCD is
8576 computed recursively, using the divide and conquer algorithm in ``On
8577 Sch@"onhage's algorithm and subquadratic integer GCD computation'' by M@"oller
8578 (@pxref{References}). The recursive algorithm consists of these main
8584 Call HGCD recursively, on the most significant @math{N/2} limbs. Apply the
8585 resulting matrix @math{T_1} to the full numbers, reducing them to a size just
8589 Perform a small number of division or subtraction steps to reduce the numbers
8590 to size below @math{3N/2}. This is essential mainly for the unlikely case of
8594 Call HGCD recursively, on the most significant @math{N/2} limbs of the reduced
8595 numbers. Apply the resulting matrix @math{T_2} to the full numbers, reducing
8596 them to a size just above @math{N/2}.
8599 Compute @math{T = T_1 T_2}.
8602 Perform a small number of division and subtraction steps to satisfy the
8603 requirements, and return.
8606 GCD is then implemented as a loop around HGCD, similarly to Lehmer's
8607 algorithm. Where Lehmer repeatedly chops off the top two limbs, calls
8608 @code{mpn_hgcd2}, and applies the resulting matrix to the full numbers, the
8609 subquadratic GCD chops off the most significant third of the limbs (the
8610 proportion is a tuning parameter, and @math{1/3} seems to be more efficient
8611 than, e.g, @math{1/2}), calls @code{mpn_hgcd}, and applies the resulting
8612 matrix. Once the input numbers are reduced to size below
8613 @code{GCD_DC_THRESHOLD}, Lehmer's algorithm is used for the rest of the work.
8615 The asymptotic running time of both HGCD and GCD is @m{O(M(N)\log N),O(M(N)*log(N))},
8616 where @math{M(N)} is the time for multiplying two @math{N}-limb numbers.
8618 @comment node-name, next, previous, up
8620 @node Extended GCD, Jacobi Symbol, Subquadratic GCD, Greatest Common Divisor Algorithms
8621 @subsection Extended GCD
8623 The extended GCD function, or GCDEXT, calculates @math{@gcd{}(a,b)} and also
8624 cofactors @math{x} and @math{y} satisfying @m{ax+by=\gcd(a@C{}b),
8625 a*x+b*y=gcd(a@C{}b)}. All the algorithms used for plain GCD are extended to
8626 handle this case. The binary algorithm is used only for single-limb GCDEXT.
8627 Lehmer's algorithm is used for sizes up to @code{GCDEXT_DC_THRESHOLD}. Above
8628 this threshold, GCDEXT is implemented as a loop around HGCD, but with more
8629 book-keeping to keep track of the cofactors. This gives the same asymptotic
8630 running time as for GCD and HGCD, @m{O(M(N)\log N),O(M(N)*log(N))}
8632 One difference to plain GCD is that while the inputs @math{a} and @math{b} are
8633 reduced as the algorithm proceeds, the cofactors @math{x} and @math{y} grow in
8634 size. This makes the tuning of the chopping-point more difficult. The current
8635 code chops off the most significant half of the inputs for the call to HGCD in
8636 the first iteration, and the most significant two thirds for the remaining
8637 calls. This strategy could surely be improved. Also the stop condition for the
8638 loop, where Lehmer's algorithm is invoked once the inputs are reduced below
8639 @code{GCDEXT_DC_THRESHOLD}, could maybe be improved by taking into account the
8640 current size of the cofactors.
8642 @node Jacobi Symbol, , Extended GCD, Greatest Common Divisor Algorithms
8643 @subsection Jacobi Symbol
8644 @cindex Jacobi symbol algorithm
8646 @code{mpz_jacobi} and @code{mpz_kronecker} are currently implemented with a
8647 simple binary algorithm similar to that described for the GCDs (@pxref{Binary
8648 GCD}). They're not very fast when both inputs are large. Lehmer's multi-step
8649 improvement or a binary based multi-step algorithm is likely to be better.
8651 When one operand fits a single limb, and that includes @code{mpz_kronecker_ui}
8652 and friends, an initial reduction is done with either @code{mpn_mod_1} or
8653 @code{mpn_modexact_1_odd}, followed by the binary algorithm on a single limb.
8654 The binary algorithm is well suited to a single limb, and the whole
8655 calculation in this case is quite efficient.
8657 In all the routines sign changes for the result are accumulated using some bit
8658 twiddling, avoiding table lookups or conditional jumps.
8662 @node Powering Algorithms, Root Extraction Algorithms, Greatest Common Divisor Algorithms, Algorithms
8663 @section Powering Algorithms
8664 @cindex Powering algorithms
8667 * Normal Powering Algorithm::
8668 * Modular Powering Algorithm::
8672 @node Normal Powering Algorithm, Modular Powering Algorithm, Powering Algorithms, Powering Algorithms
8673 @subsection Normal Powering
8675 Normal @code{mpz} or @code{mpf} powering uses a simple binary algorithm,
8676 successively squaring and then multiplying by the base when a 1 bit is seen in
8677 the exponent, as per Knuth section 4.6.3. The ``left to right''
8678 variant described there is used rather than algorithm A, since it's just as
8679 easy and can be done with somewhat less temporary memory.
8682 @node Modular Powering Algorithm, , Normal Powering Algorithm, Powering Algorithms
8683 @subsection Modular Powering
8685 Modular powering is implemented using a @math{2^k}-ary sliding window
8686 algorithm, as per ``Handbook of Applied Cryptography'' algorithm 14.85
8687 (@pxref{References}). @math{k} is chosen according to the size of the
8688 exponent. Larger exponents use larger values of @math{k}, the choice being
8689 made to minimize the average number of multiplications that must supplement
8692 The modular multiplies and squares use either a simple division or the REDC
8693 method by Montgomery (@pxref{References}). REDC is a little faster,
8694 essentially saving N single limb divisions in a fashion similar to an exact
8695 remainder (@pxref{Exact Remainder}).
8698 @node Root Extraction Algorithms, Radix Conversion Algorithms, Powering Algorithms, Algorithms
8699 @section Root Extraction Algorithms
8700 @cindex Root extraction algorithms
8703 * Square Root Algorithm::
8704 * Nth Root Algorithm::
8705 * Perfect Square Algorithm::
8706 * Perfect Power Algorithm::
8710 @node Square Root Algorithm, Nth Root Algorithm, Root Extraction Algorithms, Root Extraction Algorithms
8711 @subsection Square Root
8712 @cindex Square root algorithm
8713 @cindex Karatsuba square root algorithm
8715 Square roots are taken using the ``Karatsuba Square Root'' algorithm by Paul
8716 Zimmermann (@pxref{References}).
8718 An input @math{n} is split into four parts of @math{k} bits each, so with
8719 @math{b=2^k} we have @m{n = a_3b^3 + a_2b^2 + a_1b + a_0, n = a3*b^3 + a2*b^2
8720 + a1*b + a0}. Part @ms{a,3} must be ``normalized'' so that either the high or
8721 second highest bit is set. In GMP, @math{k} is kept on a limb boundary and
8722 the input is left shifted (by an even number of bits) to normalize.
8724 The square root of the high two parts is taken, by recursive application of
8725 the algorithm (bottoming out in a one-limb Newton's method),
8727 $$ s',r' = \mathop{\rm sqrtrem} \> (a_3b + a_2) $$
8732 s1,r1 = sqrtrem (a3*b + a2)
8736 This is an approximation to the desired root and is extended by a division to
8737 give @math{s},@math{r},
8740 q,u &= \mathop{\rm divrem} \> (r'b + a_1, 2s') \cr
8748 q,u = divrem (r1*b + a1, 2*s1)
8754 The normalization requirement on @ms{a,3} means at this point @math{s} is
8755 either correct or 1 too big. @math{r} is negative in the latter case, so
8758 \mathop{\rm if} \; r &< 0 \; \mathop{\rm then} \cr
8759 r &\leftarrow r + 2s - 1 \cr
8772 The algorithm is expressed in a divide and conquer form, but as noted in the
8773 paper it can also be viewed as a discrete variant of Newton's method, or as a
8774 variation on the schoolboy method (no longer taught) for square roots two
8777 If the remainder @math{r} is not required then usually only a few high limbs
8778 of @math{r} and @math{u} need to be calculated to determine whether an
8779 adjustment to @math{s} is required. This optimization is not currently
8782 In the Karatsuba multiplication range this algorithm is @m{O({3\over2}
8783 M(N/2)),O(1.5*M(N/2))}, where @math{M(n)} is the time to multiply two numbers
8784 of @math{n} limbs. In the FFT multiplication range this grows to a bound of
8785 @m{O(6 M(N/2)),O(6*M(N/2))}. In practice a factor of about 1.5 to 1.8 is
8786 found in the Karatsuba and Toom-3 ranges, growing to 2 or 3 in the FFT range.
8788 The algorithm does all its calculations in integers and the resulting
8789 @code{mpn_sqrtrem} is used for both @code{mpz_sqrt} and @code{mpf_sqrt}.
8790 The extended precision given by @code{mpf_sqrt_ui} is obtained by
8791 padding with zero limbs.
8794 @node Nth Root Algorithm, Perfect Square Algorithm, Square Root Algorithm, Root Extraction Algorithms
8795 @subsection Nth Root
8796 @cindex Root extraction algorithm
8797 @cindex Nth root algorithm
8799 Integer Nth roots are taken using Newton's method with the following
8800 iteration, where @math{A} is the input and @math{n} is the root to be taken.
8802 $$a_{i+1} = {1\over n} \left({A \over a_i^{n-1}} + (n-1)a_i \right)$$
8808 a[i+1] = - * ( --------- + (n-1)*a[i] )
8813 The initial approximation @m{a_1,a[1]} is generated bitwise by successively
8814 powering a trial root with or without new 1 bits, aiming to be just above the
8815 true root. The iteration converges quadratically when started from a good
8816 approximation. When @math{n} is large more initial bits are needed to get
8817 good convergence. The current implementation is not particularly well
8821 @node Perfect Square Algorithm, Perfect Power Algorithm, Nth Root Algorithm, Root Extraction Algorithms
8822 @subsection Perfect Square
8823 @cindex Perfect square algorithm
8825 A significant fraction of non-squares can be quickly identified by checking
8826 whether the input is a quadratic residue modulo small integers.
8828 @code{mpz_perfect_square_p} first tests the input mod 256, which means just
8829 examining the low byte. Only 44 different values occur for squares mod 256,
8830 so 82.8% of inputs can be immediately identified as non-squares.
8832 On a 32-bit system similar tests are done mod 9, 5, 7, 13 and 17, for a total
8833 99.25% of inputs identified as non-squares. On a 64-bit system 97 is tested
8834 too, for a total 99.62%.
8836 These moduli are chosen because they're factors of @math{2^@W{24}-1} (or
8837 @math{2^@W{48}-1} for 64-bits), and such a remainder can be quickly taken just
8838 using additions (see @code{mpn_mod_34lsub1}).
8840 When nails are in use moduli are instead selected by the @file{gen-psqr.c}
8841 program and applied with an @code{mpn_mod_1}. The same @math{2^@W{24}-1} or
8842 @math{2^@W{48}-1} could be done with nails using some extra bit shifts, but
8843 this is not currently implemented.
8845 In any case each modulus is applied to the @code{mpn_mod_34lsub1} or
8846 @code{mpn_mod_1} remainder and a table lookup identifies non-squares. By
8847 using a ``modexact'' style calculation, and suitably permuted tables, just one
8848 multiply each is required, see the code for details. Moduli are also combined
8849 to save operations, so long as the lookup tables don't become too big.
8850 @file{gen-psqr.c} does all the pre-calculations.
8852 A square root must still be taken for any value that passes these tests, to
8853 verify it's really a square and not one of the small fraction of non-squares
8854 that get through (ie.@: a pseudo-square to all the tested bases).
8856 Clearly more residue tests could be done, @code{mpz_perfect_square_p} only
8857 uses a compact and efficient set. Big inputs would probably benefit from more
8858 residue testing, small inputs might be better off with less. The assumed
8859 distribution of squares versus non-squares in the input would affect such
8863 @node Perfect Power Algorithm, , Perfect Square Algorithm, Root Extraction Algorithms
8864 @subsection Perfect Power
8865 @cindex Perfect power algorithm
8867 Detecting perfect powers is required by some factorization algorithms.
8868 Currently @code{mpz_perfect_power_p} is implemented using repeated Nth root
8869 extractions, though naturally only prime roots need to be considered.
8870 (@xref{Nth Root Algorithm}.)
8872 If a prime divisor @math{p} with multiplicity @math{e} can be found, then only
8873 roots which are divisors of @math{e} need to be considered, much reducing the
8874 work necessary. To this end divisibility by a set of small primes is checked.
8877 @node Radix Conversion Algorithms, Other Algorithms, Root Extraction Algorithms, Algorithms
8878 @section Radix Conversion
8879 @cindex Radix conversion algorithms
8881 Radix conversions are less important than other algorithms. A program
8882 dominated by conversions should probably use a different data representation.
8890 @node Binary to Radix, Radix to Binary, Radix Conversion Algorithms, Radix Conversion Algorithms
8891 @subsection Binary to Radix
8893 Conversions from binary to a power-of-2 radix use a simple and fast
8894 @math{O(N)} bit extraction algorithm.
8896 Conversions from binary to other radices use one of two algorithms. Sizes
8897 below @code{GET_STR_PRECOMPUTE_THRESHOLD} use a basic @math{O(N^2)} method.
8898 Repeated divisions by @math{b^n} are made, where @math{b} is the radix and
8899 @math{n} is the biggest power that fits in a limb. But instead of simply
8900 using the remainder @math{r} from such divisions, an extra divide step is done
8901 to give a fractional limb representing @math{r/b^n}. The digits of @math{r}
8902 can then be extracted using multiplications by @math{b} rather than divisions.
8903 Special case code is provided for decimal, allowing multiplications by 10 to
8904 optimize to shifts and adds.
8906 Above @code{GET_STR_PRECOMPUTE_THRESHOLD} a sub-quadratic algorithm is used.
8907 For an input @math{t}, powers @m{b^{n2^i},b^(n*2^i)} of the radix are
8908 calculated, until a power between @math{t} and @m{\sqrt{t},sqrt(t)} is
8909 reached. @math{t} is then divided by that largest power, giving a quotient
8910 which is the digits above that power, and a remainder which is those below.
8911 These two parts are in turn divided by the second highest power, and so on
8912 recursively. When a piece has been divided down to less than
8913 @code{GET_STR_DC_THRESHOLD} limbs, the basecase algorithm described above is
8916 The advantage of this algorithm is that big divisions can make use of the
8917 sub-quadratic divide and conquer division (@pxref{Divide and Conquer
8918 Division}), and big divisions tend to have less overheads than lots of
8919 separate single limb divisions anyway. But in any case the cost of
8920 calculating the powers @m{b^{n2^i},b^(n*2^i)} must first be overcome.
8922 @code{GET_STR_PRECOMPUTE_THRESHOLD} and @code{GET_STR_DC_THRESHOLD} represent
8923 the same basic thing, the point where it becomes worth doing a big division to
8924 cut the input in half. @code{GET_STR_PRECOMPUTE_THRESHOLD} includes the cost
8925 of calculating the radix power required, whereas @code{GET_STR_DC_THRESHOLD}
8926 assumes that's already available, which is the case when recursing.
8928 Since the base case produces digits from least to most significant but they
8929 want to be stored from most to least, it's necessary to calculate in advance
8930 how many digits there will be, or at least be sure not to underestimate that.
8931 For GMP the number of input bits is multiplied by @code{chars_per_bit_exactly}
8932 from @code{mp_bases}, rounding up. The result is either correct or one too
8935 Examining some of the high bits of the input could increase the chance of
8936 getting the exact number of digits, but an exact result every time would not
8937 be practical, since in general the difference between numbers 100@dots{} and
8938 99@dots{} is only in the last few bits and the work to identify 99@dots{}
8939 might well be almost as much as a full conversion.
8941 @code{mpf_get_str} doesn't currently use the algorithm described here, it
8942 multiplies or divides by a power of @math{b} to move the radix point to the
8943 just above the highest non-zero digit (or at worst one above that location),
8944 then multiplies by @math{b^n} to bring out digits. This is @math{O(N^2)} and
8945 is certainly not optimal.
8947 The @math{r/b^n} scheme described above for using multiplications to bring out
8948 digits might be useful for more than a single limb. Some brief experiments
8949 with it on the base case when recursing didn't give a noticeable improvement,
8950 but perhaps that was only due to the implementation. Something similar would
8951 work for the sub-quadratic divisions too, though there would be the cost of
8952 calculating a bigger radix power.
8954 Another possible improvement for the sub-quadratic part would be to arrange
8955 for radix powers that balanced the sizes of quotient and remainder produced,
8956 ie.@: the highest power would be an @m{b^{nk},b^(n*k)} approximately equal to
8957 @m{\sqrt{t},sqrt(t)}, not restricted to a @math{2^i} factor. That ought to
8958 smooth out a graph of times against sizes, but may or may not be a net
8962 @node Radix to Binary, , Binary to Radix, Radix Conversion Algorithms
8963 @subsection Radix to Binary
8965 @strong{This section needs to be rewritten, it currently describes the
8966 algorithms used before GMP 4.3.}
8968 Conversions from a power-of-2 radix into binary use a simple and fast
8969 @math{O(N)} bitwise concatenation algorithm.
8971 Conversions from other radices use one of two algorithms. Sizes below
8972 @code{SET_STR_PRECOMPUTE_THRESHOLD} use a basic @math{O(N^2)} method. Groups
8973 of @math{n} digits are converted to limbs, where @math{n} is the biggest
8974 power of the base @math{b} which will fit in a limb, then those groups are
8975 accumulated into the result by multiplying by @math{b^n} and adding. This
8976 saves multi-precision operations, as per Knuth section 4.4 part E
8977 (@pxref{References}). Some special case code is provided for decimal, giving
8978 the compiler a chance to optimize multiplications by 10.
8980 Above @code{SET_STR_PRECOMPUTE_THRESHOLD} a sub-quadratic algorithm is used.
8981 First groups of @math{n} digits are converted into limbs. Then adjacent
8982 limbs are combined into limb pairs with @m{xb^n+y,x*b^n+y}, where @math{x}
8983 and @math{y} are the limbs. Adjacent limb pairs are combined into quads
8984 similarly with @m{xb^{2n}+y,x*b^(2n)+y}. This continues until a single block
8985 remains, that being the result.
8987 The advantage of this method is that the multiplications for each @math{x} are
8988 big blocks, allowing Karatsuba and higher algorithms to be used. But the cost
8989 of calculating the powers @m{b^{n2^i},b^(n*2^i)} must be overcome.
8990 @code{SET_STR_PRECOMPUTE_THRESHOLD} usually ends up quite big, around 5000 digits, and on
8991 some processors much bigger still.
8993 @code{SET_STR_PRECOMPUTE_THRESHOLD} is based on the input digits (and tuned
8994 for decimal), though it might be better based on a limb count, so as to be
8995 independent of the base. But that sort of count isn't used by the base case
8996 and so would need some sort of initial calculation or estimate.
8998 The main reason @code{SET_STR_PRECOMPUTE_THRESHOLD} is so much bigger than the
8999 corresponding @code{GET_STR_PRECOMPUTE_THRESHOLD} is that @code{mpn_mul_1} is
9000 much faster than @code{mpn_divrem_1} (often by a factor of 5, or more).
9004 @node Other Algorithms, Assembly Coding, Radix Conversion Algorithms, Algorithms
9005 @section Other Algorithms
9008 * Prime Testing Algorithm::
9009 * Factorial Algorithm::
9010 * Binomial Coefficients Algorithm::
9011 * Fibonacci Numbers Algorithm::
9012 * Lucas Numbers Algorithm::
9013 * Random Number Algorithms::
9017 @node Prime Testing Algorithm, Factorial Algorithm, Other Algorithms, Other Algorithms
9018 @subsection Prime Testing
9019 @cindex Prime testing algorithms
9021 The primality testing in @code{mpz_probab_prime_p} (@pxref{Number Theoretic
9022 Functions}) first does some trial division by small factors and then uses the
9023 Miller-Rabin probabilistic primality testing algorithm, as described in Knuth
9024 section 4.5.4 algorithm P (@pxref{References}).
9026 For an odd input @math{n}, and with @math{n = q@GMPmultiply{}2^k+1} where
9027 @math{q} is odd, this algorithm selects a random base @math{x} and tests
9028 whether @math{x^q @bmod{} n} is 1 or @math{-1}, or an @m{x^{q2^j} \bmod n,
9029 x^(q*2^j) mod n} is @math{1}, for @math{1@le{}j@le{}k}. If so then @math{n}
9030 is probably prime, if not then @math{n} is definitely composite.
9032 Any prime @math{n} will pass the test, but some composites do too. Such
9033 composites are known as strong pseudoprimes to base @math{x}. No @math{n} is
9034 a strong pseudoprime to more than @math{1/4} of all bases (see Knuth exercise
9035 22), hence with @math{x} chosen at random there's no more than a @math{1/4}
9036 chance a ``probable prime'' will in fact be composite.
9038 In fact strong pseudoprimes are quite rare, making the test much more
9039 powerful than this analysis would suggest, but @math{1/4} is all that's proven
9040 for an arbitrary @math{n}.
9043 @node Factorial Algorithm, Binomial Coefficients Algorithm, Prime Testing Algorithm, Other Algorithms
9044 @subsection Factorial
9045 @cindex Factorial algorithm
9047 Factorials are calculated by a combination of removal of twos, powering, and
9048 binary splitting. The procedure can be best illustrated with an example,
9051 @math{23! = 1.2.3.4.5.6.7.8.9.10.11.12.13.14.15.16.17.18.19.20.21.22.23}
9055 has factors of two removed,
9058 @math{23! = 2^{19}.1.1.3.1.5.3.7.1.9.5.11.3.13.7.15.1.17.9.19.5.21.11.23}
9062 and the resulting terms collected up according to their multiplicity,
9065 @math{23! = 2^{19}.(3.5)^3.(7.9.11)^2.(13.15.17.19.21.23)}
9068 Each sequence such as @math{13.15.17.19.21.23} is evaluated by splitting into
9069 every second term, as for instance @math{(13.17.21).(15.19.23)}, and the same
9070 recursively on each half. This is implemented iteratively using some bit
9073 Such splitting is more efficient than repeated N@cross{}1 multiplies since it
9074 forms big multiplies, allowing Karatsuba and higher algorithms to be used.
9075 And even below the Karatsuba threshold a big block of work can be more
9076 efficient for the basecase algorithm.
9078 Splitting into subsequences of every second term keeps the resulting products
9079 more nearly equal in size than would the simpler approach of say taking the
9080 first half and second half of the sequence. Nearly equal products are more
9081 efficient for the current multiply implementation.
9084 @node Binomial Coefficients Algorithm, Fibonacci Numbers Algorithm, Factorial Algorithm, Other Algorithms
9085 @subsection Binomial Coefficients
9086 @cindex Binomial coefficient algorithm
9088 Binomial coefficients @m{\left({n}\atop{k}\right), C(n@C{}k)} are calculated
9089 by first arranging @math{k @le{} n/2} using @m{\left({n}\atop{k}\right) =
9090 \left({n}\atop{n-k}\right), C(n@C{}k) = C(n@C{}n-k)} if necessary, and then
9091 evaluating the following product simply from @math{i=2} to @math{i=k}.
9093 $$ \left({n}\atop{k}\right) = (n-k+1) \prod_{i=2}^{k} {{n-k+i} \over i} $$
9099 C(n,k) = (n-k+1) * prod -------
9104 It's easy to show that each denominator @math{i} will divide the product so
9105 far, so the exact division algorithm is used (@pxref{Exact Division}).
9107 The numerators @math{n-k+i} and denominators @math{i} are first accumulated
9108 into as many fit a limb, to save multi-precision operations, though for
9109 @code{mpz_bin_ui} this applies only to the divisors, since @math{n} is an
9110 @code{mpz_t} and @math{n-k+i} in general won't fit in a limb at all.
9113 @node Fibonacci Numbers Algorithm, Lucas Numbers Algorithm, Binomial Coefficients Algorithm, Other Algorithms
9114 @subsection Fibonacci Numbers
9115 @cindex Fibonacci number algorithm
9117 The Fibonacci functions @code{mpz_fib_ui} and @code{mpz_fib2_ui} are designed
9118 for calculating isolated @m{F_n,F[n]} or @m{F_n,F[n]},@m{F_{n-1},F[n-1]}
9121 For small @math{n}, a table of single limb values in @code{__gmp_fib_table} is
9122 used. On a 32-bit limb this goes up to @m{F_{47},F[47]}, or on a 64-bit limb
9123 up to @m{F_{93},F[93]}. For convenience the table starts at @m{F_{-1},F[-1]}.
9125 Beyond the table, values are generated with a binary powering algorithm,
9126 calculating a pair @m{F_n,F[n]} and @m{F_{n-1},F[n-1]} working from high to
9127 low across the bits of @math{n}. The formulas used are
9130 F_{2k+1} &= 4F_k^2 - F_{k-1}^2 + 2(-1)^k \cr
9131 F_{2k-1} &= F_k^2 + F_{k-1}^2 \cr
9132 F_{2k} &= F_{2k+1} - F_{2k-1}
9138 F[2k+1] = 4*F[k]^2 - F[k-1]^2 + 2*(-1)^k
9139 F[2k-1] = F[k]^2 + F[k-1]^2
9141 F[2k] = F[2k+1] - F[2k-1]
9145 At each step, @math{k} is the high @math{b} bits of @math{n}. If the next bit
9146 of @math{n} is 0 then @m{F_{2k},F[2k]},@m{F_{2k-1},F[2k-1]} is used, or if
9147 it's a 1 then @m{F_{2k+1},F[2k+1]},@m{F_{2k},F[2k]} is used, and the process
9148 repeated until all bits of @math{n} are incorporated. Notice these formulas
9149 require just two squares per bit of @math{n}.
9151 It'd be possible to handle the first few @math{n} above the single limb table
9152 with simple additions, using the defining Fibonacci recurrence @m{F_{k+1} =
9153 F_k + F_{k-1}, F[k+1]=F[k]+F[k-1]}, but this is not done since it usually
9154 turns out to be faster for only about 10 or 20 values of @math{n}, and
9155 including a block of code for just those doesn't seem worthwhile. If they
9156 really mattered it'd be better to extend the data table.
9158 Using a table avoids lots of calculations on small numbers, and makes small
9159 @math{n} go fast. A bigger table would make more small @math{n} go fast, it's
9160 just a question of balancing size against desired speed. For GMP the code is
9161 kept compact, with the emphasis primarily on a good powering algorithm.
9163 @code{mpz_fib2_ui} returns both @m{F_n,F[n]} and @m{F_{n-1},F[n-1]}, but
9164 @code{mpz_fib_ui} is only interested in @m{F_n,F[n]}. In this case the last
9165 step of the algorithm can become one multiply instead of two squares. One of
9166 the following two formulas is used, according as @math{n} is odd or even.
9169 F_{2k} &= F_k (F_k + 2F_{k-1}) \cr
9170 F_{2k+1} &= (2F_k + F_{k-1}) (2F_k - F_{k-1}) + 2(-1)^k
9176 F[2k] = F[k]*(F[k]+2F[k-1])
9178 F[2k+1] = (2F[k]+F[k-1])*(2F[k]-F[k-1]) + 2*(-1)^k
9182 @m{F_{2k+1},F[2k+1]} here is the same as above, just rearranged to be a
9183 multiply. For interest, the @m{2(-1)^k, 2*(-1)^k} term both here and above
9184 can be applied just to the low limb of the calculation, without a carry or
9185 borrow into further limbs, which saves some code size. See comments with
9186 @code{mpz_fib_ui} and the internal @code{mpn_fib2_ui} for how this is done.
9189 @node Lucas Numbers Algorithm, Random Number Algorithms, Fibonacci Numbers Algorithm, Other Algorithms
9190 @subsection Lucas Numbers
9191 @cindex Lucas number algorithm
9193 @code{mpz_lucnum2_ui} derives a pair of Lucas numbers from a pair of Fibonacci
9194 numbers with the following simple formulas.
9197 L_k &= F_k + 2F_{k-1} \cr
9198 L_{k-1} &= 2F_k - F_{k-1}
9204 L[k] = F[k] + 2*F[k-1]
9205 L[k-1] = 2*F[k] - F[k-1]
9209 @code{mpz_lucnum_ui} is only interested in @m{L_n,L[n]}, and some work can be
9210 saved. Trailing zero bits on @math{n} can be handled with a single square
9213 $$ L_{2k} = L_k^2 - 2(-1)^k $$
9218 L[2k] = L[k]^2 - 2*(-1)^k
9222 And the lowest 1 bit can be handled with one multiply of a pair of Fibonacci
9223 numbers, similar to what @code{mpz_fib_ui} does.
9225 $$ L_{2k+1} = 5F_{k-1} (2F_k + F_{k-1}) - 4(-1)^k $$
9230 L[2k+1] = 5*F[k-1]*(2*F[k]+F[k-1]) - 4*(-1)^k
9236 @node Random Number Algorithms, , Lucas Numbers Algorithm, Other Algorithms
9237 @subsection Random Numbers
9238 @cindex Random number algorithms
9240 For the @code{urandomb} functions, random numbers are generated simply by
9241 concatenating bits produced by the generator. As long as the generator has
9242 good randomness properties this will produce well-distributed @math{N} bit
9245 For the @code{urandomm} functions, random numbers in a range @math{0@le{}R<N}
9246 are generated by taking values @math{R} of @m{\lceil \log_2 N \rceil,
9247 ceil(log2(N))} bits each until one satisfies @math{R<N}. This will normally
9248 require only one or two attempts, but the attempts are limited in case the
9249 generator is somehow degenerate and produces only 1 bits or similar.
9251 @cindex Mersenne twister algorithm
9252 The Mersenne Twister generator is by Matsumoto and Nishimura
9253 (@pxref{References}). It has a non-repeating period of @math{2^@W{19937}-1},
9254 which is a Mersenne prime, hence the name of the generator. The state is 624
9255 words of 32-bits each, which is iterated with one XOR and shift for each
9256 32-bit word generated, making the algorithm very fast. Randomness properties
9257 are also very good and this is the default algorithm used by GMP.
9259 @cindex Linear congruential algorithm
9260 Linear congruential generators are described in many text books, for instance
9261 Knuth volume 2 (@pxref{References}). With a modulus @math{M} and parameters
9262 @math{A} and @math{C}, a integer state @math{S} is iterated by the formula
9263 @math{S @leftarrow{} A@GMPmultiply{}S+C @bmod{} M}. At each step the new
9264 state is a linear function of the previous, mod @math{M}, hence the name of
9267 In GMP only moduli of the form @math{2^N} are supported, and the current
9268 implementation is not as well optimized as it could be. Overheads are
9269 significant when @math{N} is small, and when @math{N} is large clearly the
9270 multiply at each step will become slow. This is not a big concern, since the
9271 Mersenne Twister generator is better in every respect and is therefore
9272 recommended for all normal applications.
9274 For both generators the current state can be deduced by observing enough
9275 output and applying some linear algebra (over GF(2) in the case of the
9276 Mersenne Twister). This generally means raw output is unsuitable for
9277 cryptographic applications without further hashing or the like.
9280 @node Assembly Coding, , Other Algorithms, Algorithms
9281 @section Assembly Coding
9282 @cindex Assembly coding
9284 The assembly subroutines in GMP are the most significant source of speed at
9285 small to moderate sizes. At larger sizes algorithm selection becomes more
9286 important, but of course speedups in low level routines will still speed up
9287 everything proportionally.
9289 Carry handling and widening multiplies that are important for GMP can't be
9290 easily expressed in C@. GCC @code{asm} blocks help a lot and are provided in
9291 @file{longlong.h}, but hand coding low level routines invariably offers a
9292 speedup over generic C by a factor of anything from 2 to 10.
9295 * Assembly Code Organisation::
9297 * Assembly Carry Propagation::
9298 * Assembly Cache Handling::
9299 * Assembly Functional Units::
9300 * Assembly Floating Point::
9301 * Assembly SIMD Instructions::
9302 * Assembly Software Pipelining::
9303 * Assembly Loop Unrolling::
9304 * Assembly Writing Guide::
9308 @node Assembly Code Organisation, Assembly Basics, Assembly Coding, Assembly Coding
9309 @subsection Code Organisation
9310 @cindex Assembly code organisation
9311 @cindex Code organisation
9313 The various @file{mpn} subdirectories contain machine-dependent code, written
9314 in C or assembly. The @file{mpn/generic} subdirectory contains default code,
9315 used when there's no machine-specific version of a particular file.
9317 Each @file{mpn} subdirectory is for an ISA family. Generally 32-bit and
9318 64-bit variants in a family cannot share code and have separate directories.
9319 Within a family further subdirectories may exist for CPU variants.
9321 In each directory a @file{nails} subdirectory may exist, holding code with
9322 nails support for that CPU variant. A @code{NAILS_SUPPORT} directive in each
9323 file indicates the nails values the code handles. Nails code only exists
9324 where it's faster, or promises to be faster, than plain code. There's no
9325 effort put into nails if they're not going to enhance a given CPU.
9328 @node Assembly Basics, Assembly Carry Propagation, Assembly Code Organisation, Assembly Coding
9329 @subsection Assembly Basics
9331 @code{mpn_addmul_1} and @code{mpn_submul_1} are the most important routines
9332 for overall GMP performance. All multiplications and divisions come down to
9333 repeated calls to these. @code{mpn_add_n}, @code{mpn_sub_n},
9334 @code{mpn_lshift} and @code{mpn_rshift} are next most important.
9336 On some CPUs assembly versions of the internal functions
9337 @code{mpn_mul_basecase} and @code{mpn_sqr_basecase} give significant speedups,
9338 mainly through avoiding function call overheads. They can also potentially
9339 make better use of a wide superscalar processor, as can bigger primitives like
9340 @code{mpn_addmul_2} or @code{mpn_addmul_4}.
9342 The restrictions on overlaps between sources and destinations
9343 (@pxref{Low-level Functions}) are designed to facilitate a variety of
9344 implementations. For example, knowing @code{mpn_add_n} won't have partly
9345 overlapping sources and destination means reading can be done far ahead of
9346 writing on superscalar processors, and loops can be vectorized on a vector
9347 processor, depending on the carry handling.
9350 @node Assembly Carry Propagation, Assembly Cache Handling, Assembly Basics, Assembly Coding
9351 @subsection Carry Propagation
9352 @cindex Assembly carry propagation
9354 The problem that presents most challenges in GMP is propagating carries from
9355 one limb to the next. In functions like @code{mpn_addmul_1} and
9356 @code{mpn_add_n}, carries are the only dependencies between limb operations.
9358 On processors with carry flags, a straightforward CISC style @code{adc} is
9359 generally best. AMD K6 @code{mpn_addmul_1} however is an example of an
9360 unusual set of circumstances where a branch works out better.
9362 On RISC processors generally an add and compare for overflow is used. This
9363 sort of thing can be seen in @file{mpn/generic/aors_n.c}. Some carry
9364 propagation schemes require 4 instructions, meaning at least 4 cycles per
9365 limb, but other schemes may use just 1 or 2. On wide superscalar processors
9366 performance may be completely determined by the number of dependent
9367 instructions between carry-in and carry-out for each limb.
9369 On vector processors good use can be made of the fact that a carry bit only
9370 very rarely propagates more than one limb. When adding a single bit to a
9371 limb, there's only a carry out if that limb was @code{0xFF@dots{}FF} which on
9372 random data will be only 1 in @m{2\GMPraise{@code{mp\_bits\_per\_limb}},
9373 2^mp_bits_per_limb}. @file{mpn/cray/add_n.c} is an example of this, it adds
9374 all limbs in parallel, adds one set of carry bits in parallel and then only
9375 rarely needs to fall through to a loop propagating further carries.
9377 On the x86s, GCC (as of version 2.95.2) doesn't generate particularly good code
9378 for the RISC style idioms that are necessary to handle carry bits in
9379 C@. Often conditional jumps are generated where @code{adc} or @code{sbb} forms
9380 would be better. And so unfortunately almost any loop involving carry bits
9381 needs to be coded in assembly for best results.
9384 @node Assembly Cache Handling, Assembly Functional Units, Assembly Carry Propagation, Assembly Coding
9385 @subsection Cache Handling
9386 @cindex Assembly cache handling
9388 GMP aims to perform well both on operands that fit entirely in L1 cache and
9391 Basic routines like @code{mpn_add_n} or @code{mpn_lshift} are often used on
9392 large operands, so L2 and main memory performance is important for them.
9393 @code{mpn_mul_1} and @code{mpn_addmul_1} are mostly used for multiply and
9394 square basecases, so L1 performance matters most for them, unless assembly
9395 versions of @code{mpn_mul_basecase} and @code{mpn_sqr_basecase} exist, in
9396 which case the remaining uses are mostly for larger operands.
9398 For L2 or main memory operands, memory access times will almost certainly be
9399 more than the calculation time. The aim therefore is to maximize memory
9400 throughput, by starting a load of the next cache line while processing the
9401 contents of the previous one. Clearly this is only possible if the chip has a
9402 lock-up free cache or some sort of prefetch instruction. Most current chips
9403 have both these features.
9405 Prefetching sources combines well with loop unrolling, since a prefetch can be
9406 initiated once per unrolled loop (or more than once if the loop covers more
9407 than one cache line).
9409 On CPUs without write-allocate caches, prefetching destinations will ensure
9410 individual stores don't go further down the cache hierarchy, limiting
9411 bandwidth. Of course for calculations which are slow anyway, like
9412 @code{mpn_divrem_1}, write-throughs might be fine.
9414 The distance ahead to prefetch will be determined by memory latency versus
9415 throughput. The aim of course is to have data arriving continuously, at peak
9416 throughput. Some CPUs have limits on the number of fetches or prefetches in
9419 If a special prefetch instruction doesn't exist then a plain load can be used,
9420 but in that case care must be taken not to attempt to read past the end of an
9421 operand, since that might produce a segmentation violation.
9423 Some CPUs or systems have hardware that detects sequential memory accesses and
9424 initiates suitable cache movements automatically, making life easy.
9427 @node Assembly Functional Units, Assembly Floating Point, Assembly Cache Handling, Assembly Coding
9428 @subsection Functional Units
9430 When choosing an approach for an assembly loop, consideration is given to
9431 what operations can execute simultaneously and what throughput can thereby be
9432 achieved. In some cases an algorithm can be tweaked to accommodate available
9435 Loop control will generally require a counter and pointer updates, costing as
9436 much as 5 instructions, plus any delays a branch introduces. CPU addressing
9437 modes might reduce pointer updates, perhaps by allowing just one updating
9438 pointer and others expressed as offsets from it, or on CISC chips with all
9439 addressing done with the loop counter as a scaled index.
9441 The final loop control cost can be amortised by processing several limbs in
9442 each iteration (@pxref{Assembly Loop Unrolling}). This at least ensures loop
9443 control isn't a big fraction the work done.
9445 Memory throughput is always a limit. If perhaps only one load or one store
9446 can be done per cycle then 3 cycles/limb will the top speed for ``binary''
9447 operations like @code{mpn_add_n}, and any code achieving that is optimal.
9449 Integer resources can be freed up by having the loop counter in a float
9450 register, or by pressing the float units into use for some multiplying,
9451 perhaps doing every second limb on the float side (@pxref{Assembly Floating
9454 Float resources can be freed up by doing carry propagation on the integer
9455 side, or even by doing integer to float conversions in integers using bit
9459 @node Assembly Floating Point, Assembly SIMD Instructions, Assembly Functional Units, Assembly Coding
9460 @subsection Floating Point
9461 @cindex Assembly floating Point
9463 Floating point arithmetic is used in GMP for multiplications on CPUs with poor
9464 integer multipliers. It's mostly useful for @code{mpn_mul_1},
9465 @code{mpn_addmul_1} and @code{mpn_submul_1} on 64-bit machines, and
9466 @code{mpn_mul_basecase} on both 32-bit and 64-bit machines.
9468 With IEEE 53-bit double precision floats, integer multiplications producing up
9469 to 53 bits will give exact results. Breaking a 64@cross{}64 multiplication
9470 into eight 16@cross{}@math{32@rightarrow{}48} bit pieces is convenient. With
9471 some care though six 21@cross{}@math{32@rightarrow{}53} bit products can be
9472 used, if one of the lower two 21-bit pieces also uses the sign bit.
9474 For the @code{mpn_mul_1} family of functions on a 64-bit machine, the
9475 invariant single limb is split at the start, into 3 or 4 pieces. Inside the
9476 loop, the bignum operand is split into 32-bit pieces. Fast conversion of
9477 these unsigned 32-bit pieces to floating point is highly machine-dependent.
9478 In some cases, reading the data into the integer unit, zero-extending to
9479 64-bits, then transferring to the floating point unit back via memory is the
9482 Converting partial products back to 64-bit limbs is usually best done as a
9483 signed conversion. Since all values are smaller than @m{2^{53},2^53}, signed
9484 and unsigned are the same, but most processors lack unsigned conversions.
9488 Here is a diagram showing 16@cross{}32 bit products for an @code{mpn_mul_1} or
9489 @code{mpn_addmul_1} with a 64-bit limb. The single limb operand V is split
9490 into four 16-bit parts. The multi-limb operand U is split in the loop into
9494 \global\newdimen\GMPbits \global\GMPbits=0.18em
9497 \hbox to 128\GMPbits{\hfil
9500 \hbox to 48\GMPbits {\GMPvrule \hfil$#2$\hfil \vrule}%
9503 \raise \GMPboxdepth \hbox{\hskip 2em #3}}}
9508 \hbox to 128\GMPbits {\hfil
9511 \hbox to 64\GMPbits{%
9512 \GMPvrule \hfil$v48$\hfil
9513 \vrule \hfil$v32$\hfil
9514 \vrule \hfil$v16$\hfil
9515 \vrule \hfil$v00$\hfil
9518 \raise \GMPboxdepth \hbox{\hskip 2em V Operand}}
9521 \hbox to 128\GMPbits {\hfil
9522 \raise \GMPboxdepth \hbox{$\times$\hskip 1.5em}%
9525 \hbox to 64\GMPbits {%
9526 \GMPvrule \hfil$u32$\hfil
9527 \vrule \hfil$u00$\hfil
9530 \raise \GMPboxdepth \hbox{\hskip 2em U Operand (one limb)}}%
9532 \hbox{\vbox to 2ex{\hrule width 128\GMPbits}}%
9533 \GMPbox{0}{u00 \times v00}{$p00$\hskip 1.5em 48-bit products}%
9535 \GMPbox{16}{u00 \times v16}{$p16$}
9537 \GMPbox{32}{u00 \times v32}{$p32$}
9539 \GMPbox{48}{u00 \times v48}{$p48$}
9541 \GMPbox{32}{u32 \times v00}{$r32$}
9543 \GMPbox{48}{u32 \times v16}{$r48$}
9545 \GMPbox{64}{u32 \times v32}{$r64$}
9547 \GMPbox{80}{u32 \times v48}{$r80$}
9554 |v48|v32|v16|v00| V operand
9558 x | u32 | u00 | U operand (one limb)
9561 ---------------------------------
9564 | u00 x v00 | p00 48-bit products
9591 @math{p32} and @math{r32} can be summed using floating-point addition, and
9592 likewise @math{p48} and @math{r48}. @math{p00} and @math{p16} can be summed
9593 with @math{r64} and @math{r80} from the previous iteration.
9595 For each loop then, four 49-bit quantities are transferred to the integer unit,
9599 % GMPbox here should be 49 bits wide, but use 51 to better show p16+r80'
9600 % crossing into the upper 64 bits.
9603 \hbox to 128\GMPbits {%
9607 \hbox to 51\GMPbits {\GMPvrule \hfil$#2$\hfil \vrule}%
9610 \raise \GMPboxdepth \hbox{\hskip 1.5em $#3$\hfil}%
9612 \newbox\b \setbox\b\hbox{64 bits}%
9613 \newdimen\bw \bw=\wd\b \advance\bw by 2em
9614 \newdimen\x \x=128\GMPbits
9619 \hbox to 128\GMPbits {%
9621 \raise 0.5ex \vbox{\hrule \hbox to \x {}}%
9623 \raise 0.5ex \vbox{\hrule \hbox to \x {}}%
9625 \raise 0.5ex \vbox{\hrule \hbox to \x {}}%
9627 \raise 0.5ex \vbox{\hrule \hbox to \x {}}%
9630 \GMPbox{0}{p00+r64'}{i00}
9632 \GMPbox{16}{p16+r80'}{i16}
9634 \GMPbox{32}{p32+r32}{i32}
9636 \GMPbox{48}{p48+r48}{i48}
9642 |-----64bits----|-----64bits----|
9659 The challenge then is to sum these efficiently and add in a carry limb,
9660 generating a low 64-bit result limb and a high 33-bit carry limb (@math{i48}
9661 extends 33 bits into the high half).
9664 @node Assembly SIMD Instructions, Assembly Software Pipelining, Assembly Floating Point, Assembly Coding
9665 @subsection SIMD Instructions
9666 @cindex Assembly SIMD
9668 The single-instruction multiple-data support in current microprocessors is
9669 aimed at signal processing algorithms where each data point can be treated
9670 more or less independently. There's generally not much support for
9671 propagating the sort of carries that arise in GMP.
9673 SIMD multiplications of say four 16@cross{}16 bit multiplies only do as much
9674 work as one 32@cross{}32 from GMP's point of view, and need some shifts and
9675 adds besides. But of course if say the SIMD form is fully pipelined and uses
9676 less instruction decoding then it may still be worthwhile.
9678 On the x86 chips, MMX has so far found a use in @code{mpn_rshift} and
9679 @code{mpn_lshift}, and is used in a special case for 16-bit multipliers in the
9680 P55 @code{mpn_mul_1}. SSE2 is used for Pentium 4 @code{mpn_mul_1},
9681 @code{mpn_addmul_1}, and @code{mpn_submul_1}.
9684 @node Assembly Software Pipelining, Assembly Loop Unrolling, Assembly SIMD Instructions, Assembly Coding
9685 @subsection Software Pipelining
9686 @cindex Assembly software pipelining
9688 Software pipelining consists of scheduling instructions around the branch
9689 point in a loop. For example a loop might issue a load not for use in the
9690 present iteration but the next, thereby allowing extra cycles for the data to
9693 Naturally this is wanted only when doing things like loads or multiplies that
9694 take several cycles to complete, and only where a CPU has multiple functional
9695 units so that other work can be done in the meantime.
9697 A pipeline with several stages will have a data value in progress at each
9698 stage and each loop iteration moves them along one stage. This is like
9701 If the latency of some instruction is greater than the loop time then it will
9702 be necessary to unroll, so one register has a result ready to use while
9703 another (or multiple others) are still in progress. (@pxref{Assembly Loop
9707 @node Assembly Loop Unrolling, Assembly Writing Guide, Assembly Software Pipelining, Assembly Coding
9708 @subsection Loop Unrolling
9709 @cindex Assembly loop unrolling
9711 Loop unrolling consists of replicating code so that several limbs are
9712 processed in each loop. At a minimum this reduces loop overheads by a
9713 corresponding factor, but it can also allow better register usage, for example
9714 alternately using one register combination and then another. Judicious use of
9715 @command{m4} macros can help avoid lots of duplication in the source code.
9717 Any amount of unrolling can be handled with a loop counter that's decremented
9718 by @math{N} each time, stopping when the remaining count is less than the
9719 further @math{N} the loop will process. Or by subtracting @math{N} at the
9720 start, the termination condition becomes when the counter @math{C} is less
9721 than 0 (and the count of remaining limbs is @math{C+N}).
9723 Alternately for a power of 2 unroll the loop count and remainder can be
9724 established with a shift and mask. This is convenient if also making a
9725 computed jump into the middle of a large loop.
9727 The limbs not a multiple of the unrolling can be handled in various ways, for
9732 A simple loop at the end (or the start) to process the excess. Care will be
9733 wanted that it isn't too much slower than the unrolled part.
9736 A set of binary tests, for example after an 8-limb unrolling, test for 4 more
9737 limbs to process, then a further 2 more or not, and finally 1 more or not.
9738 This will probably take more code space than a simple loop.
9741 A @code{switch} statement, providing separate code for each possible excess,
9742 for example an 8-limb unrolling would have separate code for 0 remaining, 1
9743 remaining, etc, up to 7 remaining. This might take a lot of code, but may be
9744 the best way to optimize all cases in combination with a deep pipelined loop.
9747 A computed jump into the middle of the loop, thus making the first iteration
9748 handle the excess. This should make times smoothly increase with size, which
9749 is attractive, but setups for the jump and adjustments for pointers can be
9750 tricky and could become quite difficult in combination with deep pipelining.
9754 @node Assembly Writing Guide, , Assembly Loop Unrolling, Assembly Coding
9755 @subsection Writing Guide
9756 @cindex Assembly writing guide
9758 This is a guide to writing software pipelined loops for processing limb
9759 vectors in assembly.
9761 First determine the algorithm and which instructions are needed. Code it
9762 without unrolling or scheduling, to make sure it works. On a 3-operand CPU
9763 try to write each new value to a new register, this will greatly simplify later
9766 Then note for each instruction the functional unit and/or issue port
9767 requirements. If an instruction can use either of two units, like U0 or U1
9768 then make a category ``U0/U1''. Count the total using each unit (or combined
9769 unit), and count all instructions.
9771 Figure out from those counts the best possible loop time. The goal will be to
9772 find a perfect schedule where instruction latencies are completely hidden.
9773 The total instruction count might be the limiting factor, or perhaps a
9774 particular functional unit. It might be possible to tweak the instructions to
9775 help the limiting factor.
9777 Suppose the loop time is @math{N}, then make @math{N} issue buckets, with the
9778 final loop branch at the end of the last. Now fill the buckets with dummy
9779 instructions using the functional units desired. Run this to make sure the
9780 intended speed is reached.
9782 Now replace the dummy instructions with the real instructions from the slow
9783 but correct loop you started with. The first will typically be a load
9784 instruction. Then the instruction using that value is placed in a bucket an
9785 appropriate distance down. Run the loop again, to check it still runs at
9788 Keep placing instructions, frequently measuring the loop. After a few you
9789 will need to wrap around from the last bucket back to the top of the loop. If
9790 you used the new-register for new-value strategy above then there will be no
9791 register conflicts. If not then take care not to clobber something already in
9792 use. Changing registers at this time is very error prone.
9794 The loop will overlap two or more of the original loop iterations, and the
9795 computation of one vector element result will be started in one iteration of
9796 the new loop, and completed one or several iterations later.
9798 The final step is to create feed-in and wind-down code for the loop. A good
9799 way to do this is to make a copy (or copies) of the loop at the start and
9800 delete those instructions which don't have valid antecedents, and at the end
9801 replicate and delete those whose results are unwanted (including any further
9804 The loop will have a minimum number of limbs loaded and processed, so the
9805 feed-in code must test if the request size is smaller and skip either to a
9806 suitable part of the wind-down or to special code for small sizes.
9809 @node Internals, Contributors, Algorithms, Top
9813 @strong{This chapter is provided only for informational purposes and the
9814 various internals described here may change in future GMP releases.
9815 Applications expecting to be compatible with future releases should use only
9816 the documented interfaces described in previous chapters.}
9819 * Integer Internals::
9820 * Rational Internals::
9822 * Raw Output Internals::
9823 * C++ Interface Internals::
9826 @node Integer Internals, Rational Internals, Internals, Internals
9827 @section Integer Internals
9828 @cindex Integer internals
9830 @code{mpz_t} variables represent integers using sign and magnitude, in space
9831 dynamically allocated and reallocated. The fields are as follows.
9834 @item @code{_mp_size}
9835 The number of limbs, or the negative of that when representing a negative
9836 integer. Zero is represented by @code{_mp_size} set to zero, in which case
9837 the @code{_mp_d} data is unused.
9840 A pointer to an array of limbs which is the magnitude. These are stored
9841 ``little endian'' as per the @code{mpn} functions, so @code{_mp_d[0]} is the
9842 least significant limb and @code{_mp_d[ABS(_mp_size)-1]} is the most
9843 significant. Whenever @code{_mp_size} is non-zero, the most significant limb
9846 Currently there's always at least one limb allocated, so for instance
9847 @code{mpz_set_ui} never needs to reallocate, and @code{mpz_get_ui} can fetch
9848 @code{_mp_d[0]} unconditionally (though its value is then only wanted if
9849 @code{_mp_size} is non-zero).
9851 @item @code{_mp_alloc}
9852 @code{_mp_alloc} is the number of limbs currently allocated at @code{_mp_d},
9853 and naturally @code{_mp_alloc >= ABS(_mp_size)}. When an @code{mpz} routine
9854 is about to (or might be about to) increase @code{_mp_size}, it checks
9855 @code{_mp_alloc} to see whether there's enough space, and reallocates if not.
9856 @code{MPZ_REALLOC} is generally used for this.
9859 The various bitwise logical functions like @code{mpz_and} behave as if
9860 negative values were twos complement. But sign and magnitude is always used
9861 internally, and necessary adjustments are made during the calculations.
9862 Sometimes this isn't pretty, but sign and magnitude are best for other
9865 Some internal temporary variables are setup with @code{MPZ_TMP_INIT} and these
9866 have @code{_mp_d} space obtained from @code{TMP_ALLOC} rather than the memory
9867 allocation functions. Care is taken to ensure that these are big enough that
9868 no reallocation is necessary (since it would have unpredictable consequences).
9870 @code{_mp_size} and @code{_mp_alloc} are @code{int}, although @code{mp_size_t}
9871 is usually a @code{long}. This is done to make the fields just 32 bits on
9872 some 64 bits systems, thereby saving a few bytes of data space but still
9873 providing plenty of range.
9876 @node Rational Internals, Float Internals, Integer Internals, Internals
9877 @section Rational Internals
9878 @cindex Rational internals
9880 @code{mpq_t} variables represent rationals using an @code{mpz_t} numerator and
9881 denominator (@pxref{Integer Internals}).
9883 The canonical form adopted is denominator positive (and non-zero), no common
9884 factors between numerator and denominator, and zero uniquely represented as
9887 It's believed that casting out common factors at each stage of a calculation
9888 is best in general. A GCD is an @math{O(N^2)} operation so it's better to do
9889 a few small ones immediately than to delay and have to do a big one later.
9890 Knowing the numerator and denominator have no common factors can be used for
9891 example in @code{mpq_mul} to make only two cross GCDs necessary, not four.
9893 This general approach to common factors is badly sub-optimal in the presence
9894 of simple factorizations or little prospect for cancellation, but GMP has no
9895 way to know when this will occur. As per @ref{Efficiency}, that's left to
9896 applications. The @code{mpq_t} framework might still suit, with
9897 @code{mpq_numref} and @code{mpq_denref} for direct access to the numerator and
9898 denominator, or of course @code{mpz_t} variables can be used directly.
9901 @node Float Internals, Raw Output Internals, Rational Internals, Internals
9902 @section Float Internals
9903 @cindex Float internals
9905 Efficient calculation is the primary aim of GMP floats and the use of whole
9906 limbs and simple rounding facilitates this.
9908 @code{mpf_t} floats have a variable precision mantissa and a single machine
9909 word signed exponent. The mantissa is represented using sign and magnitude.
9911 @c FIXME: The arrow heads don't join to the lines exactly.
9913 \global\newdimen\GMPboxwidth \GMPboxwidth=5em
9914 \global\newdimen\GMPboxheight \GMPboxheight=3ex
9915 \def\centreline{\hbox{\raise 0.8ex \vbox{\hrule \hbox{\hfil}}}}
9918 \hbox to 5\GMPboxwidth {most significant limb \hfil least significant limb}
9920 \def\GMPcentreline#1{\hbox{\raise 0.5 ex \vbox{\hrule \hbox to #1 {}}}}
9922 \hbox to 3\GMPboxwidth {%
9923 \setbox 0 = \hbox{@code{\_mp\_exp}}%
9924 \dimen0=3\GMPboxwidth
9925 \advance\dimen0 by -\wd0
9927 \advance\dimen0 by -1em
9928 \setbox1 = \hbox{$\rightarrow$}%
9930 \advance\dimen1 by -\wd1
9931 \GMPcentreline{\dimen0}%
9935 \GMPcentreline{\dimen1{}}%
9937 \hbox to 2\GMPboxwidth {\hfil @code{\_mp\_d}}}
9942 \vrule height 2ex depth 1ex
9943 \hbox to \GMPboxwidth {}%
9945 \hbox to \GMPboxwidth {}%
9947 \hbox to \GMPboxwidth {}%
9949 \hbox to \GMPboxwidth {}%
9951 \hbox to \GMPboxwidth {}%
9957 \hbox to 3\GMPboxwidth {%
9958 \hfil $\cdot$} \hbox {$\leftarrow$ radix point\hfil}}
9959 \hbox to 5\GMPboxwidth{%
9960 \setbox 0 = \hbox{@code{\_mp\_size}}%
9961 \dimen0 = 5\GMPboxwidth
9962 \advance\dimen0 by -\wd0
9964 \advance\dimen0 by -1em
9966 \setbox1 = \hbox{$\leftarrow$}%
9967 \setbox2 = \hbox{$\rightarrow$}%
9968 \advance\dimen0 by -\wd1
9969 \advance\dimen1 by -\wd2
9972 \GMPcentreline{\dimen0}%
9976 \GMPcentreline{\dimen1}%
9983 significant significant
9987 |---- _mp_exp ---> |
9988 _____ _____ _____ _____ _____
9989 |_____|_____|_____|_____|_____|
9990 . <------------ radix point
9992 <-------- _mp_size --------->
9998 The fields are as follows.
10001 @item @code{_mp_size}
10002 The number of limbs currently in use, or the negative of that when
10003 representing a negative value. Zero is represented by @code{_mp_size} and
10004 @code{_mp_exp} both set to zero, and in that case the @code{_mp_d} data is
10005 unused. (In the future @code{_mp_exp} might be undefined when representing
10008 @item @code{_mp_prec}
10009 The precision of the mantissa, in limbs. In any calculation the aim is to
10010 produce @code{_mp_prec} limbs of result (the most significant being non-zero).
10013 A pointer to the array of limbs which is the absolute value of the mantissa.
10014 These are stored ``little endian'' as per the @code{mpn} functions, so
10015 @code{_mp_d[0]} is the least significant limb and
10016 @code{_mp_d[ABS(_mp_size)-1]} the most significant.
10018 The most significant limb is always non-zero, but there are no other
10019 restrictions on its value, in particular the highest 1 bit can be anywhere
10022 @code{_mp_prec+1} limbs are allocated to @code{_mp_d}, the extra limb being
10023 for convenience (see below). There are no reallocations during a calculation,
10024 only in a change of precision with @code{mpf_set_prec}.
10026 @item @code{_mp_exp}
10027 The exponent, in limbs, determining the location of the implied radix point.
10028 Zero means the radix point is just above the most significant limb. Positive
10029 values mean a radix point offset towards the lower limbs and hence a value
10030 @math{@ge{} 1}, as for example in the diagram above. Negative exponents mean
10031 a radix point further above the highest limb.
10033 Naturally the exponent can be any value, it doesn't have to fall within the
10034 limbs as the diagram shows, it can be a long way above or a long way below.
10035 Limbs other than those included in the @code{@{_mp_d,_mp_size@}} data
10036 are treated as zero.
10039 The @code{_mp_size} and @code{_mp_prec} fields are @code{int}, although the
10040 @code{mp_size_t} type is usually a @code{long}. The @code{_mp_exp} field is
10041 usually @code{long}. This is done to make some fields just 32 bits on some 64
10042 bits systems, thereby saving a few bytes of data space but still providing
10043 plenty of precision and a very large range.
10048 The following various points should be noted.
10052 The least significant limbs @code{_mp_d[0]} etc can be zero, though such low
10053 zeros can always be ignored. Routines likely to produce low zeros check and
10054 avoid them to save time in subsequent calculations, but for most routines
10055 they're quite unlikely and aren't checked.
10057 @item Mantissa Size Range
10058 The @code{_mp_size} count of limbs in use can be less than @code{_mp_prec} if
10059 the value can be represented in less. This means low precision values or
10060 small integers stored in a high precision @code{mpf_t} can still be operated
10063 @code{_mp_size} can also be greater than @code{_mp_prec}. Firstly a value is
10064 allowed to use all of the @code{_mp_prec+1} limbs available at @code{_mp_d},
10065 and secondly when @code{mpf_set_prec_raw} lowers @code{_mp_prec} it leaves
10066 @code{_mp_size} unchanged and so the size can be arbitrarily bigger than
10070 All rounding is done on limb boundaries. Calculating @code{_mp_prec} limbs
10071 with the high non-zero will ensure the application requested minimum precision
10074 The use of simple ``trunc'' rounding towards zero is efficient, since there's
10075 no need to examine extra limbs and increment or decrement.
10078 Since the exponent is in limbs, there are no bit shifts in basic operations
10079 like @code{mpf_add} and @code{mpf_mul}. When differing exponents are
10080 encountered all that's needed is to adjust pointers to line up the relevant
10083 Of course @code{mpf_mul_2exp} and @code{mpf_div_2exp} will require bit shifts,
10084 but the choice is between an exponent in limbs which requires shifts there, or
10085 one in bits which requires them almost everywhere else.
10087 @item Use of @code{_mp_prec+1} Limbs
10088 The extra limb on @code{_mp_d} (@code{_mp_prec+1} rather than just
10089 @code{_mp_prec}) helps when an @code{mpf} routine might get a carry from its
10090 operation. @code{mpf_add} for instance will do an @code{mpn_add} of
10091 @code{_mp_prec} limbs. If there's no carry then that's the result, but if
10092 there is a carry then it's stored in the extra limb of space and
10093 @code{_mp_size} becomes @code{_mp_prec+1}.
10095 Whenever @code{_mp_prec+1} limbs are held in a variable, the low limb is not
10096 needed for the intended precision, only the @code{_mp_prec} high limbs. But
10097 zeroing it out or moving the rest down is unnecessary. Subsequent routines
10098 reading the value will simply take the high limbs they need, and this will be
10099 @code{_mp_prec} if their target has that same precision. This is no more than
10100 a pointer adjustment, and must be checked anyway since the destination
10101 precision can be different from the sources.
10103 Copy functions like @code{mpf_set} will retain a full @code{_mp_prec+1} limbs
10104 if available. This ensures that a variable which has @code{_mp_size} equal to
10105 @code{_mp_prec+1} will get its full exact value copied. Strictly speaking
10106 this is unnecessary since only @code{_mp_prec} limbs are needed for the
10107 application's requested precision, but it's considered that an @code{mpf_set}
10108 from one variable into another of the same precision ought to produce an exact
10111 @item Application Precisions
10112 @code{__GMPF_BITS_TO_PREC} converts an application requested precision to an
10113 @code{_mp_prec}. The value in bits is rounded up to a whole limb then an
10114 extra limb is added since the most significant limb of @code{_mp_d} is only
10115 non-zero and therefore might contain only one bit.
10117 @code{__GMPF_PREC_TO_BITS} does the reverse conversion, and removes the extra
10118 limb from @code{_mp_prec} before converting to bits. The net effect of
10119 reading back with @code{mpf_get_prec} is simply the precision rounded up to a
10120 multiple of @code{mp_bits_per_limb}.
10122 Note that the extra limb added here for the high only being non-zero is in
10123 addition to the extra limb allocated to @code{_mp_d}. For example with a
10124 32-bit limb, an application request for 250 bits will be rounded up to 8
10125 limbs, then an extra added for the high being only non-zero, giving an
10126 @code{_mp_prec} of 9. @code{_mp_d} then gets 10 limbs allocated. Reading
10127 back with @code{mpf_get_prec} will take @code{_mp_prec} subtract 1 limb and
10128 multiply by 32, giving 256 bits.
10130 Strictly speaking, the fact the high limb has at least one bit means that a
10131 float with, say, 3 limbs of 32-bits each will be holding at least 65 bits, but
10132 for the purposes of @code{mpf_t} it's considered simply to be 64 bits, a nice
10133 multiple of the limb size.
10137 @node Raw Output Internals, C++ Interface Internals, Float Internals, Internals
10138 @section Raw Output Internals
10139 @cindex Raw output internals
10142 @code{mpz_out_raw} uses the following format.
10145 \global\newdimen\GMPboxwidth \GMPboxwidth=5em
10146 \global\newdimen\GMPboxheight \GMPboxheight=3ex
10147 \def\centreline{\hbox{\raise 0.8ex \vbox{\hrule \hbox{\hfil}}}}
10150 \def\GMPcentreline#1{\hbox{\raise 0.5 ex \vbox{\hrule \hbox to #1 {}}}}
10154 \vrule height 2.5ex depth 1.5ex
10155 \hbox to \GMPboxwidth {\hfil size\hfil}%
10157 \hbox to 3\GMPboxwidth {\hfil data bytes\hfil}%
10164 +------+------------------------+
10165 | size | data bytes |
10166 +------+------------------------+
10170 The size is 4 bytes written most significant byte first, being the number of
10171 subsequent data bytes, or the twos complement negative of that when a negative
10172 integer is represented. The data bytes are the absolute value of the integer,
10173 written most significant byte first.
10175 The most significant data byte is always non-zero, so the output is the same
10176 on all systems, irrespective of limb size.
10178 In GMP 1, leading zero bytes were written to pad the data bytes to a multiple
10179 of the limb size. @code{mpz_inp_raw} will still accept this, for
10182 The use of ``big endian'' for both the size and data fields is deliberate, it
10183 makes the data easy to read in a hex dump of a file. Unfortunately it also
10184 means that the limb data must be reversed when reading or writing, so neither
10185 a big endian nor little endian system can just read and write @code{_mp_d}.
10188 @node C++ Interface Internals, , Raw Output Internals, Internals
10189 @section C++ Interface Internals
10190 @cindex C++ interface internals
10192 A system of expression templates is used to ensure something like @code{a=b+c}
10193 turns into a simple call to @code{mpz_add} etc. For @code{mpf_class}
10194 the scheme also ensures the precision of the final
10195 destination is used for any temporaries within a statement like
10196 @code{f=w*x+y*z}. These are important features which a naive implementation
10199 A simplified description of the scheme follows. The true scheme is
10200 complicated by the fact that expressions have different return types. For
10201 detailed information, refer to the source code.
10203 To perform an operation, say, addition, we first define a ``function object''
10207 struct __gmp_binary_plus
10209 static void eval(mpf_t f, mpf_t g, mpf_t h) @{ mpf_add(f, g, h); @}
10214 And an ``additive expression'' object,
10217 __gmp_expr<__gmp_binary_expr<mpf_class, mpf_class, __gmp_binary_plus> >
10218 operator+(const mpf_class &f, const mpf_class &g)
10221 <__gmp_binary_expr<mpf_class, mpf_class, __gmp_binary_plus> >(f, g);
10225 The seemingly redundant @code{__gmp_expr<__gmp_binary_expr<@dots{}>>} is used to
10226 encapsulate any possible kind of expression into a single template type. In
10227 fact even @code{mpf_class} etc are @code{typedef} specializations of
10230 Next we define assignment of @code{__gmp_expr} to @code{mpf_class}.
10234 mpf_class & mpf_class::operator=(const __gmp_expr<T> &expr)
10236 expr.eval(this->get_mpf_t(), this->precision());
10240 template <class Op>
10241 void __gmp_expr<__gmp_binary_expr<mpf_class, mpf_class, Op> >::eval
10242 (mpf_t f, mp_bitcnt_t precision)
10244 Op::eval(f, expr.val1.get_mpf_t(), expr.val2.get_mpf_t());
10248 where @code{expr.val1} and @code{expr.val2} are references to the expression's
10249 operands (here @code{expr} is the @code{__gmp_binary_expr} stored within the
10250 @code{__gmp_expr}).
10252 This way, the expression is actually evaluated only at the time of assignment,
10253 when the required precision (that of @code{f}) is known. Furthermore the
10254 target @code{mpf_t} is now available, thus we can call @code{mpf_add} directly
10255 with @code{f} as the output argument.
10257 Compound expressions are handled by defining operators taking subexpressions
10258 as their arguments, like this:
10261 template <class T, class U>
10263 <__gmp_binary_expr<__gmp_expr<T>, __gmp_expr<U>, __gmp_binary_plus> >
10264 operator+(const __gmp_expr<T> &expr1, const __gmp_expr<U> &expr2)
10267 <__gmp_binary_expr<__gmp_expr<T>, __gmp_expr<U>, __gmp_binary_plus> >
10272 And the corresponding specializations of @code{__gmp_expr::eval}:
10275 template <class T, class U, class Op>
10277 <__gmp_binary_expr<__gmp_expr<T>, __gmp_expr<U>, Op> >::eval
10278 (mpf_t f, mp_bitcnt_t precision)
10280 // declare two temporaries
10281 mpf_class temp1(expr.val1, precision), temp2(expr.val2, precision);
10282 Op::eval(f, temp1.get_mpf_t(), temp2.get_mpf_t());
10286 The expression is thus recursively evaluated to any level of complexity and
10287 all subexpressions are evaluated to the precision of @code{f}.
10290 @node Contributors, References, Internals, Top
10291 @comment node-name, next, previous, up
10292 @appendix Contributors
10293 @cindex Contributors
10295 Torbj@"orn Granlund wrote the original GMP library and is still the main
10296 developer. Code not explicitly attributed to others, was contributed by
10297 Torbj@"orn. Several other individuals and organizations have contributed
10298 GMP. Here is a list in chronological order on first contribution:
10300 Gunnar Sj@"odin and Hans Riesel helped with mathematical problems in early
10301 versions of the library.
10303 Richard Stallman helped with the interface design and revised the first
10304 version of this manual.
10306 Brian Beuning and Doug Lea helped with testing of early versions of the
10307 library and made creative suggestions.
10309 John Amanatides of York University in Canada contributed the function
10310 @code{mpz_probab_prime_p}.
10312 Paul Zimmermann wrote the REDC-based mpz_powm code, the Sch@"onhage-Strassen
10313 FFT multiply code, and the Karatsuba square root code. He also improved the
10314 Toom3 code for GMP 4.2. Paul sparked the development of GMP 2, with his
10315 comparisons between bignum packages. The ECMNET project Paul is organizing
10316 was a driving force behind many of the optimizations in GMP 3. Paul also
10317 wrote the new GMP 4.3 nth root code (with Torbj@"orn).
10319 Ken Weber (Kent State University, Universidade Federal do Rio Grande do Sul)
10320 contributed now defunct versions of @code{mpz_gcd}, @code{mpz_divexact},
10321 @code{mpn_gcd}, and @code{mpn_bdivmod}, partially supported by CNPq (Brazil)
10324 Per Bothner of Cygnus Support helped to set up GMP to use Cygnus' configure.
10325 He has also made valuable suggestions and tested numerous intermediary
10328 Joachim Hollman was involved in the design of the @code{mpf} interface, and in
10329 the @code{mpz} design revisions for version 2.
10331 Bennet Yee contributed the initial versions of @code{mpz_jacobi} and
10332 @code{mpz_legendre}.
10334 Andreas Schwab contributed the files @file{mpn/m68k/lshift.S} and
10335 @file{mpn/m68k/rshift.S} (now in @file{.asm} form).
10337 Robert Harley of Inria, France and David Seal of ARM, England, suggested clever
10338 improvements for population count. Robert also wrote highly optimized
10339 Karatsuba and 3-way Toom multiplication functions for GMP 3, and contributed
10340 the ARM assembly code.
10342 Torsten Ekedahl of the Mathematical department of Stockholm University provided
10343 significant inspiration during several phases of the GMP development. His
10344 mathematical expertise helped improve several algorithms.
10346 Linus Nordberg wrote the new configure system based on autoconf and
10347 implemented the new random functions.
10349 Kevin Ryde worked on a large number of things: optimized x86 code, m4 asm
10350 macros, parameter tuning, speed measuring, the configure system, function
10351 inlining, divisibility tests, bit scanning, Jacobi symbols, Fibonacci and Lucas
10352 number functions, printf and scanf functions, perl interface, demo expression
10353 parser, the algorithms chapter in the manual, @file{gmpasm-mode.el}, and
10354 various miscellaneous improvements elsewhere.
10356 Kent Boortz made the Mac OS 9 port.
10358 Steve Root helped write the optimized alpha 21264 assembly code.
10360 Gerardo Ballabio wrote the @file{gmpxx.h} C++ class interface and the C++
10361 @code{istream} input routines.
10363 Jason Moxham rewrote @code{mpz_fac_ui}.
10365 Pedro Gimeno implemented the Mersenne Twister and made other random number
10368 Niels M@"oller wrote the sub-quadratic GCD and extended GCD code, the
10369 quadratic Hensel division code, and (with Torbj@"orn) the new divide and
10370 conquer division code for GMP 4.3. Niels also helped implement the new Toom
10371 multiply code for GMP 4.3 and implemented helper functions to simplify Toom
10372 evaluations for GMP 5.0. He wrote the original version of mpn_mulmod_bnm1.
10374 Alberto Zanoni and Marco Bodrato suggested the unbalanced multiply strategy,
10375 and found the optimal strategies for evaluation and interpolation in Toom
10378 Marco Bodrato helped implement the new Toom multiply code for GMP 4.3 and
10379 implemented most of the new Toom multiply and squaring code for 5.0.
10380 He is the main author of the current mpn_mulmod_bnm1 and mpn_mullo_n. Marco
10381 also wrote the functions mpn_invert and mpn_invertappr.
10383 David Harvey suggested the internal function @code{mpn_bdiv_dbm1}, implementing
10384 division relevant to Toom multiplication. He also worked on fast assembly
10385 sequences, in particular on a fast AMD64 @code{mpn_mul_basecase}.
10387 Martin Boij wrote @code{mpn_perfect_power_p}.
10389 (This list is chronological, not ordered after significance. If you have
10390 contributed to GMP but are not listed above, please tell
10391 @email{gmp-devel@@gmplib.org} about the omission!)
10393 The development of floating point functions of GNU MP 2, were supported in part
10394 by the ESPRIT-BRA (Basic Research Activities) 6846 project POSSO (POlynomial
10397 The development of GMP 2, 3, and 4 was supported in part by the IDA Center for
10398 Computing Sciences.
10400 Thanks go to Hans Thorsen for donating an SGI system for the GMP test system
10403 @node References, GNU Free Documentation License, Contributors, Top
10404 @comment node-name, next, previous, up
10405 @appendix References
10408 @c FIXME: In tex, the @uref's are unhyphenated, which is good for clarity,
10409 @c but being long words they upset paragraph formatting (the preceding line
10410 @c can get badly stretched). Would like an conditional @* style line break
10411 @c if the uref is too long to fit on the last line of the paragraph, but it's
10412 @c not clear how to do that. For now explicit @texlinebreak{}s are used on
10413 @c paragraphs that come out bad.
10419 Jonathan M. Borwein and Peter B. Borwein, ``Pi and the AGM: A Study in
10420 Analytic Number Theory and Computational Complexity'', Wiley, 1998.
10423 Richard Crandall and Carl Pomerance, ``Prime Numbers: A Computational
10424 Perspective'', 2nd edition, Springer-Verlag, 2005.
10425 @texlinebreak{} @uref{http://math.dartmouth.edu/~carlp/}
10428 Henri Cohen, ``A Course in Computational Algebraic Number Theory'', Graduate
10429 Texts in Mathematics number 138, Springer-Verlag, 1993.
10430 @texlinebreak{} @uref{http://www.math.u-bordeaux.fr/~cohen/}
10433 Donald E. Knuth, ``The Art of Computer Programming'', volume 2,
10434 ``Seminumerical Algorithms'', 3rd edition, Addison-Wesley, 1998.
10435 @texlinebreak{} @uref{http://www-cs-faculty.stanford.edu/~knuth/taocp.html}
10438 John D. Lipson, ``Elements of Algebra and Algebraic Computing'',
10439 The Benjamin Cummings Publishing Company Inc, 1981.
10442 Alfred J. Menezes, Paul C. van Oorschot and Scott A. Vanstone, ``Handbook of
10443 Applied Cryptography'', @uref{http://www.cacr.math.uwaterloo.ca/hac/}
10446 Richard M. Stallman and the GCC Developer Community, ``Using the GNU Compiler
10447 Collection'', Free Software Foundation, 2008, available online
10448 @uref{http://gcc.gnu.org/onlinedocs/}, and in the GCC package
10449 @uref{ftp://ftp.gnu.org/gnu/gcc/}
10456 Yves Bertot, Nicolas Magaud and Paul Zimmermann, ``A Proof of GMP Square
10457 Root'', Journal of Automated Reasoning, volume 29, 2002, pp.@: 225-252. Also
10458 available online as INRIA Research Report 4475, June 2001,
10459 @uref{http://www.inria.fr/rrrt/rr-4475.html}
10462 Christoph Burnikel and Joachim Ziegler, ``Fast Recursive Division'',
10463 Max-Planck-Institut fuer Informatik Research Report MPI-I-98-1-022,
10464 @texlinebreak{} @uref{http://data.mpi-sb.mpg.de/internet/reports.nsf/NumberView/1998-1-022}
10467 Torbj@"orn Granlund and Peter L. Montgomery, ``Division by Invariant Integers
10468 using Multiplication'', in Proceedings of the SIGPLAN PLDI'94 Conference, June
10469 1994. Also available @uref{ftp://ftp.cwi.nl/pub/pmontgom/divcnst.psa4.gz}
10473 Niels M@"oller and Torbj@"orn Granlund, ``Improved division by invariant
10474 integers'', to appear.
10477 Torbj@"orn Granlund and Niels M@"oller, ``Division of integers large and
10478 small'', to appear.
10482 ``An algorithm for exact division'',
10483 Journal of Symbolic Computation,
10484 volume 15, 1993, pp.@: 169-180.
10485 Research report version available @texlinebreak{}
10486 @uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1992/92-35.ps.gz}
10489 Tudor Jebelean, ``Exact Division with Karatsuba Complexity - Extended
10490 Abstract'', RISC-Linz technical report 96-31, @texlinebreak{}
10491 @uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1996/96-31.ps.gz}
10494 Tudor Jebelean, ``Practical Integer Division with Karatsuba Complexity'',
10495 ISSAC 97, pp.@: 339-341. Technical report available @texlinebreak{}
10496 @uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1996/96-29.ps.gz}
10499 Tudor Jebelean, ``A Generalization of the Binary GCD Algorithm'', ISSAC 93,
10500 pp.@: 111-116. Technical report version available @texlinebreak{}
10501 @uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1993/93-01.ps.gz}
10504 Tudor Jebelean, ``A Double-Digit Lehmer-Euclid Algorithm for Finding the GCD
10505 of Long Integers'', Journal of Symbolic Computation, volume 19, 1995,
10506 pp.@: 145-157. Technical report version also available @texlinebreak{}
10507 @uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1992/92-69.ps.gz}
10510 Werner Krandick and Tudor Jebelean, ``Bidirectional Exact Integer Division'',
10511 Journal of Symbolic Computation, volume 21, 1996, pp.@: 441-455. Early
10512 technical report version also available
10513 @uref{ftp://ftp.risc.uni-linz.ac.at/pub/techreports/1994/94-50.ps.gz}
10516 Makoto Matsumoto and Takuji Nishimura, ``Mersenne Twister: A 623-dimensionally
10517 equidistributed uniform pseudorandom number generator'', ACM Transactions on
10518 Modelling and Computer Simulation, volume 8, January 1998, pp.@: 3-30.
10519 Available online @texlinebreak{}
10520 @uref{http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/ARTICLES/mt.ps.gz} (or .pdf)
10523 R. Moenck and A. Borodin, ``Fast Modular Transforms via Division'',
10524 Proceedings of the 13th Annual IEEE Symposium on Switching and Automata
10525 Theory, October 1972, pp.@: 90-96. Reprinted as ``Fast Modular Transforms'',
10526 Journal of Computer and System Sciences, volume 8, number 3, June 1974,
10530 Niels M@"oller, ``On Sch@"onhage's algorithm and subquadratic integer GCD
10531 computation'', in Mathematics of Computation, volume 77, January 2008, pp.@:
10535 Peter L. Montgomery, ``Modular Multiplication Without Trial Division'', in
10536 Mathematics of Computation, volume 44, number 170, April 1985.
10539 Arnold Sch@"onhage and Volker Strassen, ``Schnelle Multiplikation grosser
10540 Zahlen'', Computing 7, 1971, pp.@: 281-292.
10543 Kenneth Weber, ``The accelerated integer GCD algorithm'',
10544 ACM Transactions on Mathematical Software,
10545 volume 21, number 1, March 1995, pp.@: 111-122.
10548 Paul Zimmermann, ``Karatsuba Square Root'', INRIA Research Report 3805,
10549 November 1999, @uref{http://www.inria.fr/rrrt/rr-3805.html}
10552 Paul Zimmermann, ``A Proof of GMP Fast Division and Square Root
10553 Implementations'', @texlinebreak{}
10554 @uref{http://www.loria.fr/~zimmerma/papers/proof-div-sqrt.ps.gz}
10557 Dan Zuras, ``On Squaring and Multiplying Large Integers'', ARITH-11: IEEE
10558 Symposium on Computer Arithmetic, 1993, pp.@: 260 to 271. Reprinted as ``More
10559 on Multiplying and Squaring Large Integers'', IEEE Transactions on Computers,
10560 volume 43, number 8, August 1994, pp.@: 899-908.
10564 @node GNU Free Documentation License, Concept Index, References, Top
10565 @appendix GNU Free Documentation License
10566 @cindex GNU Free Documentation License
10567 @cindex Free Documentation License
10568 @cindex Documentation license
10569 @include fdl-1.3.texi
10572 @node Concept Index, Function Index, GNU Free Documentation License, Top
10573 @comment node-name, next, previous, up
10574 @unnumbered Concept Index
10577 @node Function Index, , Concept Index, Top
10578 @comment node-name, next, previous, up
10579 @unnumbered Function and Type Index
10584 @c Local variables:
10586 @c compile-command: "make gmp.info"