Programming with Multiple Precision

There is no practical limit to the precision except those implied by the available memory of the machine GMP runs on. GMP has a rich set of functions, and the functions have a regular interface. The main target applications for GMP are cryptography applications and research, Internet security applications, algebra systems, computational algebra research, and the like.

GMP is carefully designed to be as fast as possible, both for small operands and for huge operands. The speed is achieved by using fullwords as the basic arithmetic type, by using fast algorithms, with highly optimised assembly code for the most common inner loops on many CPUs, and by a general emphasis on speed. The first GMP release was made in 1991. It is continually developed and maintained, with a new release about once a year. GMP is distributed under the GNU LGPL and GPL licenses; these licenses make the library free to use, share, and improve, and allow you to pass on the result.

The GNU licenses give freedoms, but also set firm restrictions on use with non-free programs. GMP is also known to work on Windows in both 32-bit and 64-bit mode.

GMP is carefully developed and maintained, both technically and legally. We of course inspect and test contributed code carefully, but equally importantly we make sure we have the legal right to distribute the contributions, meaning users can safely use GMP. To achieve this, we ask contributors to sign paperwork in which they allow us to distribute their work.

OpenMP parallelization of multiple precision Taylor series method
S. Dimova, I. Hristov, R. Hristova, I. Puzynin, T. Puzynina, Z. Sharipov, N. Shegunov, Z.

Very good parallel performance scalability and parallel efficiency inside one computation node of a CPU cluster is observed. We explain the details of the parallelization on the classical example of the Lorenz equations.

The same approach can be applied straightforwardly to a large class of chaotic dynamical systems. No doubt, having a numerical procedure for achieving such solutions is of great importance, because it gives us a powerful tool for theoretical investigations. A breakthrough in this direction can be found in the paper [1] of Shijun Liao. The numerical procedure in [1] works as follows. A new concept, the critical predictable time Tc, is introduced. From it, the precision K and the order N of the Taylor method needed for a given time interval are estimated, and with these K and N the solution is calculated.

The solution obtained is additionally verified by a new calculation with larger K and N over the same interval. If the two solutions coincide over the whole interval, the solution is considered to be a mathematically reliable one.

First, we have to use a multiple precision library. In order to be effective, we need a method of the highest order of accuracy, such as the Taylor series method. In addition, if we want a solution in the case of extremely large intervals, we need serious computational resources and a parallelization of the algorithm. A mathematically reliable solution of the Lorenz system using CPU cores, obtained in about 9 days and 5 hours on a time interval of record length, namely [0,], is given in [6].

It is explained in [4], [5] that a parallel reduction of the sums that appear when we calculate the Taylor coefficients has to be done. This is, of course, the crucial observation.

However, no details of the parallel version of the algorithm are given. Our goal is not to compete with the impressive simulation in [6], which uses a rather large computational resource. Our goal is to present in more detail a simple and effective OpenMP parallelization of the multiple precision Taylor series method that uses a moderate computational resource, namely one CPU node.

For benchmarks, we use the results in [5].

Multiple-precision arithmetic is arithmetic carried out in software at higher precision than provided by hardware. The IEEE 754 standard provides for 32-bit and 64-bit formats and, optionally, either an 80-bit or a 128-bit format. These encode a 1-bit sign, a biased power-of-two exponent (8, 11, 15, and 15 bits, respectively), and a significand (24, 53, 64, and 113 bits, respectively) capable of representing approximately 7, 15, 19, and 34 decimal digits, respectively.

Although the IEEE hardware precisions suffice for many practical purposes, there are many areas of computation where higher precision is required. Three simple examples where higher-precision arithmetic is required are the conversion between decimal and binary number bases, the computation of exactly rounded elementary functions, and the computation of vector dot products.

Two recent books, Experimentation in Mathematics: Computational Paths to Discovery and Mathematics by Experiment: Plausible Reasoning in the 21st Century, show how high-precision computation can lead to fundamental new discoveries in mathematics and be essential for the solution of some important physical problems. What programming languages provide native multiple-precision arithmetic?


