
policies, and rules, they too can be stored and used to plan and implement the redevelopment of the existing system. While the new system is being developed, changes will still be made to the legacy system. A notification of these changes should be made to the redevelopment team for evaluation as possible requirement changes to the new system.

Summary

A complete CASE environment requires all the components depicted in Fig. 5. Most organizations already have all these components in some form. Manual techniques are legitimate alternatives to automation and need to be considered when assessing a software redevelopment environment. CASE provides computer-aided alternatives to the components of Fig. 5. What is lacking in most tool environments is the integration of the individual tools and the integration and coordination of the tools with the project team members. Toolsets organized along traditional job functions will, at least initially, create less resistance on the part of users. Software professionals want tools to help them do what they do now, only better, not tools that force them to do their jobs differently. Once the first wave of technology is assimilated, the inherent characteristics of the technology will influence the process more and more. Improving software quality and developer productivity starts with specific quality and productivity objectives and with assessing the software engineering process relative to those objectives. Tools are then chosen that complement the engineering process and contribute to fulfilling the objectives. Overall, the greatest leverage will be realized when the entire software development and maintenance process, the total software life cycle, is fully automated in a seamless environment for the software engineer. A complete CASE environment is a carefully configured and integrated system of automated tools applied to the entire software life cycle for each unique software development, maintenance, or redevelopment problem.

COMPUTER ALGEBRA: PRINCIPLES

Computer algebra is a branch of scientific computation. There are several characteristic features that distinguish computer algebra from numerical analysis, the other principal branch of scientific computation. (1) Computer algebra involves computation in algebraic structures, such as finitely presented groups, polynomial rings, rational function fields, algebraic and transcendental extensions of the rational numbers, or differential and difference fields. (2) Computer algebra manipulates formulas. Whereas in numerical computation the input and output of algorithms are basically (integer or floating point) numbers, the input and output of computer algebra algorithms are generally formulas. So, typically, instead of computing

∫_0^{1/2} x/(x² − 1) dx = −0.1438...,

an integration algorithm in computer algebra yields

∫ x/(x² − 1) dx = ln|x² − 1| / 2.

(3) Computations in computer algebra are carried through exactly (i.e. no approximations are applied at any step). So, typically, the solutions of a system of algebraic equations such as

x⁴ + 2x²y² + 3x²y + y⁴ − y³ = 0
x² + y² − 1 = 0

are presented as (0, 1), (±√3/2, −1/2), instead of (0, 1), (±0.86602..., −0.5). Because of the exact nature of the computations in computer algebra, decision procedures can be derived from such algorithms that decide, for example, the solvability of systems of algebraic equations, the solvability of integration problems in a specified class of formulas, or the validity of geometric formulas.
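This exact behavior can be reproduced with, for example, the open-source sympy library (a sketch; sympy is our choice of tool here, not a system named in this article):

```python
from sympy import symbols, solve

x, y = symbols('x y')
eqs = [x**4 + 2*x**2*y**2 + 3*x**2*y + y**4 - y**3,
       x**2 + y**2 - 1]

# solve() works over the exact rational/algebraic numbers: no rounding.
sols = solve(eqs, [x, y])
# Substituting any returned solution back in gives exactly 0.
residuals = [e.subs({x: sx, y: sy}) for sx, sy in sols for e in eqs]
```

The returned solutions are the exact pairs (0, 1) and (±√3/2, −1/2); a numerical approximation is produced only if explicitly requested.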

Bibliography

1990. Humphrey, W. Managing the Software Process. Reading, MA: Addison-Wesley.
1996. Oman, P., and Pfleeger, S. L. Applying Software Metrics. Los Alamitos, CA: IEEE Computer Society Press.
1997. Pigoski, T. M. Practical Software Maintenance. Los Alamitos, CA: IEEE Computer Society Press.
1997. Reifer, D. J. Software Management, 5th Ed. Los Alamitos, CA: IEEE Computer Society Press.
1997. Sharon, D. "A Complete Software Engineering Environment," IEEE Software (March/April), 123-125.
1997. Thayer, R. H., and Merlin, D. Software Requirements Engineering, 2nd Ed. Los Alamitos, CA: IEEE Computer Society Press.

David Sharon

Applications of Computer Algebra

THE PIANO MOVERS PROBLEM

Many problems in robotics (q.v.) can be modeled by the piano movers problem: finding a path that will take a given body B from a given initial position to a desired final position. The additional constraint is that along the path the body should not hit any obstacles, such as walls or other bodies. A simple example in the plane is shown in Fig. 1. The initial and final positions of the body B are drawn in full, whereas a possible intermediate position is drawn in dotted lines.


J. T. Schwartz and M. Sharir have shown how to reduce this problem to a certain problem about semialgebraic sets that can be solved by Collins' cylindrical algebraic decomposition (cad) method. Semialgebraic sets are subsets of a real m-dimensional space R^m that can be cut out by polynomial equations and inequalities. That is, start with simple sets of the form

{(x₁, ..., xₘ) | p(x₁, ..., xₘ) = 0}

or

{(x₁, ..., xₘ) | q(x₁, ..., xₘ) > 0},

where p, q are polynomials with real coefficients, and allow the construction of more complicated sets by means of intersection, union, and difference. Any subset of R^m that can be defined in this way is called a semialgebraic set.

Figure 1. The piano movers problem.

Consider a two-dimensional problem, as in Fig. 1. Starting from some fixed position of the body B (say P at the origin, where P is the point at which the parts of B are joined together) in R², obtain an arbitrary position of B by applying a rotation T₁ to part B₂, a rotation T₂ to B (Fig. 2), and afterwards a translation T₃ to B. Since T₁, T₂ can be described by 2 × 2 matrices and T₃ by a vector of length 2, any such position of B can be specified by 10 coefficients (i.e. a point in R¹⁰). Some of these possible positions are illegal, since the body B would intersect or lie outside of the boundaries. If the legal positions L (⊂ R¹⁰) can be described by polynomial equations and inequalities, then L is a semialgebraic set.

The piano movers problem is now reduced to the question of whether two points P₁, P₂ in L can be joined by a path in L (i.e. whether P₁ and P₂ lie in the same connected component of L). This question can be decided by Collins' cad method, which makes heavy use of computer algebra algorithms. In particular, the cad method uses algorithms for greatest common divisors of polynomials, factorization of polynomials into square-free factors, resultant computations, and isolation of the real roots of polynomials.
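The role of resultants can be illustrated with sympy (our illustration, not drawn from the cad literature): the resultant of two polynomials with respect to x is a polynomial in the remaining variables that vanishes exactly where the two have a common solution, so it eliminates x. Taking the system of algebraic equations from the beginning of this article:

```python
from sympy import symbols, resultant, factor

x, y = symbols('x y')
f = x**4 + 2*x**2*y**2 + 3*x**2*y + y**4 - y**3
g = x**2 + y**2 - 1

# Eliminate x: r depends on y only, and its roots y = 1 and y = -1/2
# are exactly the y-coordinates of the solutions found earlier.
r = resultant(f, g, x)
r_factored = factor(r)
```

Here r comes out as (4y³ − 3y − 1)² = (y − 1)²(2y + 1)⁴, so the elimination step recovers the y-coordinates without ever solving for x.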

ALGORITHMIC METHODS IN GEOMETRY

Often, a geometric statement can be described by polynomial equations over some ground field K, such as the real or complex numbers. Consider, for instance, the statement, "The intersection of its altitude with the hypotenuse of a right-angled triangle and the midpoints of the three sides of the triangle lie on a circle" (Fig. 3). Once the geometric figure is placed into a coordinate system, it can be described by polynomial equations. For instance, the fact that E is the midpoint of the side AC is expressed by the equation 2y₃ − y₁ = 0; the fact that the line segments EM and FM are of equal length is expressed by the equation (y₇ − y₃)² + (y₈ − y₄)² − (y₇ − y₅)² − (y₈ − y₆)² = 0; and so on. In this way, the system h₁ = ... = hᵣ = 0 of polynomial equations in the indeterminates y₁, ..., yₙ determines the geometric figure. Call these polynomials the hypothesis polynomials. The equation (y₇ − y₃)² + (y₈ − y₄)² − (y₇ − y₉)² − (y₈ − y₁₀)² = 0 then states that the line segments HM and EM are also of equal length. Call this polynomial the conclusion polynomial.


Figure 2.

Figure 3.


The problem of proving the geometric statement is now reduced to the problem of proving that every common solution of the hypothesis polynomials (i.e. every valid geometric configuration) also solves the conclusion polynomial (i.e. the statement is valid for the configuration). Various computer algebra methods can be used for proving such geometry statements, such as characteristic sets or Gröbner bases. The underlying computer algebra algorithms for these methods are mainly the solution of systems of polynomial equations, various decision algorithms in the theory of polynomial ideals, and algorithms for computing in algebraic extensions of the field of rational numbers.
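As a small sketch of the Gröbner-basis approach in sympy, take a simpler statement than the circle theorem above: the midpoint of the hypotenuse of a right triangle is equidistant from two of the vertices. The coordinates and polynomial names below are our own, not from the text:

```python
from sympy import symbols, groebner, expand

# Right angle at C = (0, 0), legs to A = (u1, 0) and B = (0, u2);
# M = (x1, x2) is the midpoint of the hypotenuse AB.
u1, u2, x1, x2 = symbols('u1 u2 x1 x2')

hypotheses = [2*x1 - u1,      # x-coordinate of the midpoint of AB
              2*x2 - u2]      # y-coordinate of the midpoint of AB
# Conclusion: |MC|^2 - |MA|^2 = 0, i.e. M is equidistant from C and A.
conclusion = expand((x1**2 + x2**2) - ((x1 - u1)**2 + x2**2))

G = groebner(hypotheses, x1, x2, u1, u2, order='lex')
_, remainder = G.reduce(conclusion)
# remainder == 0: the conclusion vanishes on every valid configuration.
```

The conclusion polynomial reduces to zero modulo the Gröbner basis of the hypothesis polynomials, which proves the statement for generic coordinates u1, u2.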

MODELING IN SCIENCE AND ENGINEERING

In science and engineering, it is common to express a problem in terms of integrals or differential equations with boundary conditions. Numerical integration leads to approximations of the values of the solution functions. But, as R. W. Hamming (q.v.) has written, "the purpose of computing is insight, not numbers." So, instead of computing tables of values, it would be much more gratifying to derive formulas for the solution functions. Computer algebra algorithms can do just that for certain classes of integration and differential equation problems.
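For instance, a boundary value problem can be handed to sympy (a sketch; any of the systems discussed later could do the same) and a formula, not a table, comes back:

```python
from sympy import Function, Eq, dsolve, symbols, sin

x = symbols('x')
p = Function('p')

# p''(x) + p(x) = 0 with p(0) = 0, p'(0) = 1.
ode = Eq(p(x).diff(x, 2) + p(x), 0)
sol = dsolve(ode, p(x), ics={p(0): 0, p(x).diff(x).subs(x, 0): 1})
# The answer is the closed-form formula p(x) = sin(x).
```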

Consider, for example, the system of differential equations

−6 dq/dx(x) − d²p/dx²(x) − 6 sin(x) = 0
6 d²q/dx²(x) + a² dp/dx(x) − 6 cos(x) = 0

subject to the boundary conditions p(0) = 0, q(0) = 1, p′(0) = 0, q′(0) = 1. Given this information as input, any of the major computer algebra systems will derive the formal solution

p(x) = −12 sin(ax)/(a(a² − 1)) + 6 cos(ax)/a² + 12 sin(x)/(a² − 1) − 6/a².

Some Algorithms in Computer Algebra

Since computer algebra algorithms must yield exact results, these algorithms use integers and rational numbers as coefficients of algebraic expressions, because these numbers can be represented exactly in the computer. Coefficients may also be algebraic. Addition or subtraction of integers is quite straightforward, and these operations can be performed in time linear in the length of the numbers. The classical

algorithm for multiplication of integers x and y proceeds by multiplying every digit of x by every digit of y and adding the results after appropriate shifts. This clearly takes time quadratic in the length of the inputs. A faster multiplication algorithm, due to A. Karatsuba and Yu. Ofman, is usually called the Karatsuba algorithm. The basic idea is to cut the two inputs x, y of length ≤ n into pieces of length ≤ n/2 such that

x = a·β^(n/2) + b,    y = c·β^(n/2) + d,

where β is the basis of the number system. A usual divide-and-conquer approach would reduce the multiplication of two integers of length n to four multiplications of integers of length n/2 and some subsequent shifts and additions. The complexity of this algorithm would still be quadratic in n. However, from

xy = ac·β^n + ((a + b)(c + d) − ac − bd)·β^(n/2) + bd

we see that one of the four multiplications can be replaced by additions and shifts, which take only linear time. If this reduction of the problem is applied recursively, we get a multiplication algorithm with a time complexity proportional to n^(log₂ 3). This is still not the best we can hope for. In fact, the fastest known algorithm is due to Schönhage and Strassen, and its complexity is proportional to n (log n)(log log n). However, the overhead of this algorithm is enormous, and it pays off only if the numbers are incredibly large.

Polynomial arithmetic with coefficients in a field, like the rational numbers, presents no problem. These polynomials form a Euclidean domain, so we can carry out division with quotient and remainder. Often, however, we need to work with polynomials whose coefficients lie in an integral domain like the integers. Addition, subtraction, and multiplication are again obvious, but division with quotient and remainder is not possible. Fortunately, we can replace division by a similar process, called pseudo-division. If a(x) = aₘxᵐ + ... + a₁x + a₀ and b(x) = bₙxⁿ + ... + b₁x + b₀, with m ≥ n, then there exists a unique pair of quotient q(x) and remainder r(x) such that

bₙ^(m−n+1)·a(x) = q(x)b(x) + r(x),

where either r is the zero polynomial or the degree of r is less than the degree of b.

Good algorithms are needed for computing the greatest common divisor (gcd) of polynomials. If we are working with polynomials over a field, we can use Euclid's algorithm, which takes two polynomials f₁(x), f₂(x) and computes a chain of remainders f₃(x), ..., fₖ(x), fₖ₊₁(x) = 0, such that fᵢ is the remainder in dividing fᵢ₋₂ by fᵢ₋₁. Then fₖ(x) is the desired greatest common divisor. For polynomials over the integers we can replace division by pseudo-division, and the Euclidean algorithm still works. The problem, however, is that, although the inputs and the final result might be


quite small, the intermediate polynomials can have huge coefficients. This problem becomes even more pronounced if we deal with multivariate polynomials. As an example, consider the computation of the greatest common divisor of two bivariate polynomials

f(x, y) = y⁶ + xy⁵ + x³y − xy + x⁴ − x²,
g(x, y) = xy⁵ − 2y⁵ + x²y⁴ − 2xy⁴ + xy² + x²y

with integral coefficients. Consider y to be the main variable, so that the coefficients of powers of y are polynomials in x. Euclid's algorithm yields the polynomial remainder sequence

r₀ = f,    r₁ = g,
r₂ = (2x − x²)y³ + (2x² − x³)y² + (x⁵ − 4x⁴ + 3x³ + 4x² − 4x)y + x⁶ − 4x⁵ − 3x⁴ + 4x³ − 4x²,

followed by r₃, whose coefficients in x reach degree 14, and r₄, a polynomial of degree 1 in y whose coefficients in x reach degree 29. The greatest common divisor of f and g is obtained by eliminating the common factors p(x) in r₄. The final result is y + x. Although the inputs and the output are small, the intermediate expressions get very big. The biggest polynomial in this computation happens to occur in the pseudo-division of r₃ by r₄; this intermediate polynomial has degree 70 in x.

This problem of coefficient growth is ubiquitous in computer algebra, and there are some general approaches for dealing with it. In the special case of polynomial gcds we could always make the polynomials primitive (i.e. eliminate common factors not depending on the main variable). This approach keeps intermediate remainders as small as possible, but at a high price: many gcd computations on the coefficients. The subresultant gcd algorithm can determine many of the common factors of the coefficients without ever computing gcds of coefficients. The remainders stay reasonably small during this algorithm; in fact, in our example the integer coefficients grow only to length 4.

The most efficient algorithm for computing gcds of multivariate polynomials is the modular algorithm. The basic idea is to apply homomorphisms to the coefficients, compute the gcds of the evaluated polynomials, and use the Chinese remainder algorithm to reconstruct the actual coefficients of the gcd. If the input polynomials are univariate, we can take homomorphisms Hₚ, mapping an integer a to a mod p. If the input polynomials are multivariate, we can take evaluation homomorphisms of the form H_{x=r} for reducing the number of variables. In our example, we get

gcd(H_{x=2}(f), H_{x=2}(g)) = y + 2,
gcd(H_{x=3}(f), H_{x=3}(g)) = y + 3.

So the gcd is y + x. Never during this algorithm did we have to consider large coefficients.

Decomposing polynomials into irreducible factors is another crucial algorithm in computer algebra. A few decades ago, only rather inefficient techniques for polynomial factorization were available. Research in computer algebra has contributed to a deeper understanding of the problem and, as a result, has created much better algorithms. Let us first consider univariate polynomials with integer coefficients. Since the problem of coefficient growth appears again, one usually maps the polynomial f(x) to a polynomial f^(p)(x) by applying a homomorphism Hₚ, p a prime. f^(p) can now be factored by the Berlekamp algorithm, which involves some linear algebra and computations of gcds. Conceivably, we could factor f modulo various primes p₁, ..., pₖ and try to reconstruct the factors over the integers by the Chinese remainder algorithm, as we did in the modular gcd algorithm. The problem is that we do not know which factors correspond. So instead, one uses a p-adic approach based on Hensel's lemma, which states that a factorization of f modulo a prime p can be lifted to a factorization of f modulo p^k, for any positive integer k. Since we know bounds for the size of the coefficients that can occur in the factors, we can determine a suitable k and thus construct the correct coefficients of the integral factors. There is, however, an additional twist. If f(x) can be decomposed into irreducible factors f₁(x), f₂(x) over the integers, it could well be that, modulo p, these irreducible


factors can be split even further. So after we have lifted the factorization modulo p to a factorization modulo p^k for a suitable k, we need to try combinations of factors for determining the factors over the integers. For instance, x⁴ + 1 is a polynomial that is irreducible over the integers, but factors modulo every prime. Theoretically, this final step is the most costly one, and it makes the time complexity of the Berlekamp-Hensel algorithm exponential in the degree of the input. Nevertheless, in practice the algorithm works very well for most examples.
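The x⁴ + 1 phenomenon is easy to observe with sympy's modulus option (our illustration):

```python
from sympy import symbols, factor_list, degree

x = symbols('x')
f = x**4 + 1

over_Z = factor_list(f)             # irreducible over the integers
mod2 = factor_list(f, modulus=2)    # (x + 1)**4 modulo 2
mod5 = factor_list(f, modulus=5)    # two quadratic factors modulo 5
mod5_degrees = sorted(degree(fac, x) for fac, _ in mod5[1])
```

Modulo 2 the polynomial collapses to (x + 1)⁴, and modulo 5 it splits into two quadratics, even though no factorization over the integers exists.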


In 1982, Lenstra, Lenstra, and Lovász developed an algorithm for factoring univariate polynomials over the integers with a polynomial time complexity. Kaltofen extended this result to multivariate polynomials. The overhead of this algorithm, however, is extremely high.
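Returning to the integer arithmetic at the start of this section, the Karatsuba splitting can be sketched as a toy Python function (base β = 10 for readability; production implementations work with machine-word digits):

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply non-negative integers via the Karatsuba splitting."""
    if x < 10 or y < 10:                       # single-digit base case
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    base = 10 ** half                          # beta**(n/2)
    a, b = divmod(x, base)                     # x = a*base + b
    c, d = divmod(y, base)                     # y = c*base + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    mid = karatsuba(a + b, c + d) - ac - bd    # (a+b)(c+d) - ac - bd
    # xy = ac*base**2 + mid*base + bd: only three recursive products.
    return ac * base * base + mid * base + bd
```

Three recursive multiplications instead of four is exactly what brings the cost down from n² to n^(log₂ 3) ≈ n^1.585.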

To integrate a rational function A(x)/B(x), where A, B are polynomials with integral coefficients, we could split the polynomial B into linear factors in a suitable algebraic extension field, compute a partial fraction decomposition of the integrand, and integrate all the summands in this decomposition. The summands with linear denominators lead to logarithmic parts in the integral. Computations in the splitting field of a polynomial are very expensive; if n is the degree of the polynomial, the necessary algebraic extension can have degree n!. So the question arises as to whether it is really necessary to go to the full splitting field. For instance, for ∫ 2x/(x² − 2) dx, partial fractions over the extension by √2 give 1/(x − √2) + 1/(x + √2), and integration yields ln(x − √2) + ln(x + √2) = ln(x² − 2). The example shows that although we had to compute in the splitting field of the denominator, the algebraic extensions actually disappear in the end.

A deeper analysis of the problem reveals that, instead of factoring the denominator into linear factors, it suffices to compute a so-called square-free factorization, i.e. a decomposition of a polynomial f into f = f₁f₂²···fₖ^k, where the factors fᵢ are pairwise relatively prime and have no multiple roots (square-free). The square-free factorization can be computed by successive gcd operations. Now if A and B are relatively prime polynomials over the rational numbers, B is square-free, and the degree of A is less than the degree of B, then

∫ A(x)/B(x) dx = Σ_{i=1}^{r} cᵢ ln vᵢ(x),

where the c₁, ..., cᵣ are the distinct roots of the resultant of A(x) − c·B′(x) and B(x) w.r.t. x, and each vᵢ is the gcd of A(x) − cᵢB′(x) and B(x). In this way we get the smallest field extension necessary for expressing the integral.

The problem of integration becomes more complicated if the class of integrands is extended. A very common class is that of elementary functions. We get this class by starting with the rational functions and successively adding exponentials (exp f(x)), logarithms (log f(x)), or roots of algebraic equations, where the exponents, arguments, or coefficients are previously constructed elementary functions. Not every elementary integrand has an elementary integral (e.g. ∫ e^(x²) dx cannot be expressed as an elementary function). However, there is an algorithm (the Risch algorithm) that can decide whether a given integrand can be integrated in terms of elementary functions, and if so the Risch algorithm yields the integral. The case of algebraic functions is the most complicated part of the Risch algorithm.

The discrete analog of the integration problem is the problem of summation in finite terms. We are given an expression for a summand aₙ, and we want to compute a closed expression for the partial sums of the infinite series Σ_{n=1}^{∞} aₙ. That is, we want to compute a function S(m), such that

S(m) = Σ_{n=1}^{m} aₙ.

For instance, we want to compute

Σ_{n=1}^{m} n·xⁿ = (m·x^(m+2) − (m+1)·x^(m+1) + x) / (x − 1)².

For the case of hypergeometric summands, Gosper's algorithm solves this problem. There is also a theory of summation in finite terms similar to the theory of integration in finite terms.
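sympy's integrator illustrates the point about small extensions (a sketch): integrating 1/(x² − 2) brings in √2 and nothing larger, and differentiating the answer recovers the integrand exactly.

```python
from sympy import symbols, integrate, diff, simplify, sqrt

x = symbols('x')
integrand = 1 / (x**2 - 2)

F = integrate(integrand, x)                # a log expression over Q(sqrt(2))
check = simplify(diff(F, x) - integrand)   # exactly 0
```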

Gröbner bases are an extremely powerful method for deciding many problems in the theory of polynomial ideals. As an example, consider the system of algebraic equations

2x⁴ + y⁴ + 8x³ − 3x²y − 2y³ + 12x² − 6xy + y² + 8x − 3y + 2 = 0
8x³ + 24x² − 6xy + 24x − 6y + 8 = 0
4y³ − 3x² − 6y² − 6x + 2y − 3 = 0    (1)

Every root of these equations is also a root of any linear combination of these equations, so in fact we are looking for the zeros of the ideal generated by the left-hand sides in the ring of polynomials in x and y over Q. The left-hand sides form a specific basis for this same ideal that is better suited for solving the system. Such a basis is a Gröbner basis with respect to a lexicographic ordering of the variables. In our example, we get the following Gröbner basis, which we again write as a system of equations:

y³ − y² = 0
xy + y = 0
3x² + 2y² + 6x − 2y + 3 = 0    (2)

The solutions of (1) and (2) are the same, but obviously it is much easier to investigate the solutions of (2). The system contains a polynomial depending only on y, and its zeros are y = 0 and y = 1. Substituting these values for y into the other two equations, we get the solutions (x = −1, y = 0) and (x = −1, y = 1) for the system of algebraic equations. Other problems in the theory of polynomial ideals that can be solved by Gröbner bases include the ideal membership problem, the radical membership problem, the primary decomposition of an ideal, or the computation of the dimension of an ideal. Most computer algebra programs contain a Gröbner basis package.
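The computation can be replayed in sympy (a sketch; sympy's reduced, monic basis is normalized differently from the basis shown here, but generates the same ideal):

```python
from sympy import symbols, groebner

x, y = symbols('x y')
F = [2*x**4 + y**4 + 8*x**3 - 3*x**2*y - 2*y**3 + 12*x**2
     - 6*x*y + y**2 + 8*x - 3*y + 2,
     8*x**3 + 24*x**2 - 6*x*y + 24*x - 6*y + 8,
     4*y**3 - 3*x**2 - 6*y**2 - 6*x + 2*y - 3]

G = groebner(F, x, y, order='lex')
# Every generator reduces to 0 modulo the basis, and the basis contains
# a polynomial in y alone, ready for back-substitution.
remainders = [G.reduce(f)[1] for f in F]
```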

Representation of Expressions

Dynamic data structures are necessary for representing the computational objects of computer algebra in the memory of the computer. For instance, during the execution of the Euclidean algorithm, the coefficients in the polynomials expand and shrink again. Since the goal of the computation is an exact result, we cannot just truncate them to the most significant positions. Most computer algebra programs represent objects as lists. An integer is represented as a list of digits. For more complicated objects, the choice of representation is not that clear. So, for instance, we can represent a bivariate polynomial recursively as a polynomial in a main variable with coefficients in a univariate polynomial ring, or distributively as pairs of coefficients and power products in the variables. For example:

Recursive representation: p(x, y) = (3x² − 2x + 1)y² + (x² − 3x)y + (2x + 1)

Distributive representation: p(x, y) = 3x²y² − 2xy² + x²y + y² − 3xy + 2x + 1

For both these representations, we can use a dense or a sparse list representation. In the dense representation, a polynomial is a list of coefficients, starting from some highest coefficient down to the constant coefficient. So the dense recursive representation of p is

((3 −2 1) (1 −3 0) (2 1))

For the dense distributive representation of p, we order the power products according to their degree and lexicographically within the same degree. So p is represented by the coefficient list

(3 0 0 0 −2 1 0 1 −3 0 0 2 1)

relative to the power products x²y², x³y, x⁴, y³, xy², x²y, x³, y², xy, x², y, x, 1.

If only a few power products have a coefficient different from 0, then a dense representation wastes a lot of space. In this case we really want to represent the polynomial sparsely (i.e. by pairs of coefficients and exponents). The sparse recursive representation of p is

((((3 2) (−2 1) (1 0)) 2) (((1 2) (−3 1)) 1) (((2 1) (1 0)) 0))

and the sparse distributive representation of p is

((3 (2 2)) (−2 (1 2)) (1 (2 1)) (1 (0 2)) (−3 (1 1)) (2 (1 0)) (1 (0 0)))

For different algorithms, different representations of the objects are useful or even necessary. The multivariate gcd algorithm works best with polynomials given in recursive representation, whereas the Gröbner basis algorithm needs the input in distributive representation. So, in general, a computer algebra program has to provide many different representations for the various algebraic objects, and transformations that convert one form to another.

Bibliography

1985-present. Journal of Symbolic Computation. London: Academic Press.
1988. Davenport, J. H., Siret, Y., and Tournier, E. Computer Algebra: Systems and Algorithms for Algebraic Computation. London: Academic Press.
1989. Akritas, A. G. Elements of Computer Algebra. New York: John Wiley.
1996. Winkler, F. Polynomial Algorithms in Computer Algebra. New York: Springer-Verlag.
1997. Bronstein, M. Symbolic Integration I: Transcendental Functions. Berlin: Springer-Verlag.

Franz Winkler

SYSTEMS
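These representations map naturally onto ordinary data structures; a Python sketch (our own layout, not any particular system's format):

```python
# Sparse distributive form: power product (ex, ey) -> coefficient, for
# p(x, y) = 3x^2y^2 - 2xy^2 + x^2y + y^2 - 3xy + 2x + 1.
p_dist = {(2, 2): 3, (1, 2): -2, (2, 1): 1, (0, 2): 1,
          (1, 1): -3, (1, 0): 2, (0, 0): 1}

def to_recursive(dist):
    """Sparse recursive form: [(coeff_in_x, ey), ...] with y the main
    variable and each coefficient a sparse list [(c, ex), ...]."""
    by_y = {}
    for (ex, ey), c in dist.items():
        by_y.setdefault(ey, []).append((c, ex))
    return [(sorted(by_y[ey], key=lambda t: -t[1]), ey)
            for ey in sorted(by_y, reverse=True)]

p_rec = to_recursive(p_dist)
# [([(3, 2), (-2, 1), (1, 0)], 2), ([(1, 2), (-3, 1)], 1), ([(2, 1), (1, 0)], 0)]
```

A real system would also supply the inverse conversion, since, as noted above, different algorithms demand different forms.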

The goal of a symbolic computation system is to provide to the large and diverse community of “mathematics users” facilities for general mathematical calculations, typically including facilities such as arithmetic with exact fractions, polynomial and rational function arithmetic, factorization of integers and of polynomials, exact solution of linear and polynomialsystems of equations, closed forms for summations, simplification


of mathematical expressions, and differentiation and integration of elementary functions. Most systems also allow users to define and use their own facilities. Loosely speaking, computer algebra systems can be classified as special purpose or general purpose. Special-purpose computer algebra systems are designed to solve problems in one specific area of mathematics, for example celestial mechanics, general relativity, or group theory. Special-purpose computer algebra systems use special notations and special data structures (q.v.), and have most of their essential algorithms implemented in the kernel of the system. Such systems will normally excel in their respective areas, but are of limited use in other applications; examples are Magma (formerly Cayley) for group theory and algebraic geometry, GAP for discrete algebra and group theory, Macaulay 2 for algebraic geometry and commutative algebra, and Pari for number-theoretic computations. We will restrict our attention to general-purpose computer algebra systems. A general-purpose computer algebra system is designed to cover many diverse application areas and has sufficiently rich data structures, data types, and functions to do so.

In recent years, parallel computer algebra has attracted much attention and several experimental systems have grown out of these efforts. We will give an overview of a selection of these parallel systems.

COMPUTATION

All computer algebra systems have the ability to do mathematical computations with unassigned variables. For example

> t := x^2 * sin(x);
                 t := x^2 sin(x)
> diff(t, x);
                 2 x sin(x) + x^2 cos(x)

computes the derivative of an expression, where x is an unassigned variable. Computer algebra systems have the ability to perform exact computation, i.e. arbitrary precision rational arithmetic, algebraic arithmetic, finite field arithmetic, etc. For example, 1/2 + 1/3 yields 5/6 and (√2 − 1)³ yields 5√2 − 7, rather than 0.8333... and 0.07106.... Of course an arbitrarily precise numerical approximation can be computed on demand. "Computation" in a computer algebra system requires a much more careful definition than in other languages. For example, compare the Pascal (q.v.) statement x := a/b with the Maple statement f := int(expr, x);.
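A sketch of such exact arithmetic in Python, using the standard fractions module together with sympy for the algebraic number (the particular expressions are our illustration):

```python
from fractions import Fraction
from sympy import sqrt, expand

# Exact rational arithmetic: the result is 5/6, not 0.8333...
r = Fraction(1, 2) + Fraction(1, 3)

# Exact algebraic arithmetic: the result stays in Q(sqrt(2)),
# namely 5*sqrt(2) - 7, not 0.07106...
a = expand((sqrt(2) - 1)**3)
```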

The division a/b, provided b ≠ 0, will produce a result (a floating-point number) of a predictable size and in a predictable time. In contrast, the statement int(expr, x) may:

1. return the integral of expr with respect to x (the size of the result is difficult to predict)
2. return a partial answer (i.e. f(x) + ∫ g(x) dx)
3. fail to compute because a closed-form integral does not exist (e.g. e^(x³) does not have such an integral)
4. fail to compute because the algorithms used cannot find an integral, even though one exists
5. produce a result which is too large to represent even though it is computable
6. require a very long time to compute, making its computation not feasible for practical purposes.

The size of the result generated by the int statement is not predictable. This implies the use of dynamic memory management (e.g. garbage collection, q.v.) by computer algebra systems. This is one of the main reasons why Lisp (q.v.) was used in early systems as an implementation language.

CORRECTNESS

To some extent it is surprising that there should be any incorrectness tolerated by a supposedly mathematical symbol manipulation system, and not all are convinced that this is really necessary. However, in most current symbolic computation systems, simplifications such as (x + y)/(x + y) → 1 are performed automatically, without keeping track that x must not be equal to −y for this to make sense. This is an example where a compromise is made, since the system may sometimes make a mistake in order to have efficient simplification that is almost always correct. (Note that we are not talking about program "bugs" but rather design decisions.) Another example is the automatic simplification of 0 × f(1000) → 0 before evaluation of f(1000). This simplification would be "obviously desirable" except when f(1000) may be undefined, or infinity. Performing the simplification is an efficiency that is "slightly" incorrect, while always evaluating f(1000), if its value is not known beforehand, is something which most users would choose to avoid. Thus we see that many systems take the point of view that users will tolerate some degree of deviation from rigorous correctness. The user should be aware that all systems will perform some simplifications that are not safe 100% of the time.
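Both compromises are visible in sympy, for instance (a sketch):

```python
from sympy import symbols, Function

x, y = symbols('x y')
f = Function('f')

# Identical factors cancel automatically; the side condition x != -y
# is not recorded anywhere.
cancelled = (x + y) / (x + y)     # 1

# 0*f(1000) collapses to 0 without ever looking at f(1000), which
# might be undefined or infinite.
zeroed = 0 * f(1000)              # 0
```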


The Systems

In this section we describe some of the most relevant systems in more detail. We restrict our attention to either new systems or systems which are widely used. For older systems such as Camal, Formac, or SAC-1/II, see the second edition of this Encyclopedia.

All systems we describe are interactive general-purpose computer algebra systems that provide the following three key capabilities:

Symbolic computations: all systems provide routines for expansion and factoring of polynomials, differentiation and integration (definite and indefinite), series computation, solving equations and systems of equations, and linear algebra.

Numeric computations: all systems support arbitrary precision numerical computation, including computation of definite integrals, numerical solution of equations, and evaluation of elementary and special functions.

Graphics: all systems except Reduce allow plotting of two- and three-dimensional graphics.

Additionally, each system has a programming language which allows the user to extend the system. For other comparisons of computer algebra systems see Wester (1994).

MACSYMA

The Macsyma (Macsyma Inc., 1995) project was founded by William Martin and Joel Moses of MIT. Macsyma was built upon a predecessor MIT project, Mathlab 68, an interactive general-purpose system that was the development tool and test bed for several MIT doctoral theses in algebraic manipulation and algorithms.


Macsyma users can translate programs into Lisp. This allows interpretation by the Lisp interpreter (instead of the Macsyma language interpreter, which itself is coded in Lisp). The Lisp compiler can be applied to the translation to take the further step of compiling the program into machine code.

A user wishing to make any extensions to the functionality of the system (e.g. installing a new kind of mathematical object, but wanting addition or multiplication to work for it) must learn Lisp in order to allow its manipulation to proceed as efficiently as the rest of the built-in mathematical code. However, the language allows a large amount of extensibility without use or knowledge of the Lisp internals. For example, the parser/grammar of the Macsyma language can be altered on the fly to include new prefix, infix, or "matchfix" operations defined by user-supplied Macsyma programs. Another feature of Macsyma is its assume facility, which allows one to define properties over the symbols. For example, one can declare ASSUME(A > B) and the system then knows this relation between A and B. If the user then asks for MAX(A, B), the answer A is given. Macsyma makes extensive use of flags for directing the computation. For example, if the flag TRIGEXPAND is set to TRUE, Macsyma will cause full expansion of sines and cosines of sums of angles and of multiple angles occurring in all expressions. There also exist nonbinary flags, such as LHOSPITALLIM, which is the maximum number of times L'Hospital's rule is used in a limit computation.
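A toy model of such an assume facility, with invented names and no claim to match Macsyma's internals, can be sketched in a few lines: assumptions are stored as a relation graph, and MAX queries are answered from its transitive closure.

```python
# Toy analogue of ASSUME / MAX: record strict inequalities between
# symbols and decide queries by transitive closure of ">".

class Assumptions:
    def __init__(self):
        self.greater = {}          # symbol -> set of symbols it exceeds

    def assume_gt(self, a, b):     # like ASSUME(a > b)
        self.greater.setdefault(a, set()).add(b)

    def is_gt(self, a, b):         # does a > b follow by transitivity?
        seen, stack = set(), [a]
        while stack:
            s = stack.pop()
            for t in self.greater.get(s, ()):
                if t == b:
                    return True
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        return False

    def max(self, a, b):           # like MAX(a, b); None if undecidable
        if self.is_gt(a, b):
            return a
        if self.is_gt(b, a):
            return b
        return None

ctx = Assumptions()
ctx.assume_gt('A', 'B')            # ASSUME(A > B)
ctx.assume_gt('B', 'C')
print(ctx.max('A', 'B'))           # A
print(ctx.max('A', 'C'))           # A, via A > B > C
```

A real system must also handle negation, equalities, and numeric bounds; this sketch only shows why a query like MAX(A, B) can be answered without knowing numeric values.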

The Macsyma system internals were first implemented in Maclisp, a systems programming dialect of Lisp (q.v.) developed at MIT. For many years Macsyma was available through the Arpanet on a DEC PDP-10 running the ITS system. However, in the late 1970s and early 1980s, the important features of Maclisp were recreated in Franz Lisp running in the Unix environment. PC and Unix versions of Macsyma are now distributed by Macsyma Inc.

The Macsyma kernel is currently written in Common Lisp. The external math libraries are written in Common Lisp or the Macsyma language. Macsyma is a typical algebraic manipulation language. It provides a Fortran/Algol-like notation for mathematical expressions and programming. Automatic type checking is almost nonexistent.

REDUCE

Reduce (Hearn, 1995) was originally written in Lisp to assist symbolic computation in high energy physics in the late 1960s. Its user base grew beyond the particle physics community as its general-purpose facilities were found to be useful in many other mathematical situations. Reduce 2 was ported to several different machines and operating systems during the 1970s, making it the most widely distributed system of that time, and one of the first efforts in Lisp portability. Reduce 3, written in the "Standard Lisp" dialect, is a further refinement and enhancement. It consists of about 4 MB of Lisp code. We discuss below some of the features of Reduce 3, as found in its User's Manual.

Reduce, like Macsyma, has a simple syntax for the basic mathematical commands (expression evaluation, differentiation, integration, etc.), and a Fortran-like programming language. Its reserved words include not only the keywords of the programming language but also the names of the various flags which control default simplification and evaluation modes. For example, the reserved word EXP is a flag which, when turned OFF, blocks the default expansion of rational function expressions. Reduce has two programming modes (an "algebraic" mode and a "symbolic" mode), with the same syntactic forms for procedure definition, assignment, and control statements in both modes. In algebraic mode, data objects are manipulated through mathematical operations, such as numerical or rational function arithmetic, FACTORIZE (factor a polynomial over the integers), DF (partial differentiation), SUB (substitute an expression for a variable in another expression), or COEFF (find the coefficients of various powers of a variable in an expression). In symbolic mode, one can directly manipulate the internal representation of mathematical expressions, using Lisp-like manipulation primitives such as CAR and CDR, and routines such as LC (find the leading coefficient of a polynomial) or + (add a term to a polynomial). Most casual users of Reduce need to learn only the functionality provided by algebraic mode (programming-in-the-abstract), but since most of the basic system is coded in symbolic mode, programming in symbolic mode is sometimes necessary to augment or borrow from those basic facilities.


Numerical programs often have to be written based on a set of formulas which describe the solution of a problem in science or engineering. For that step, Reduce provides GENTRAN, an automatic code generator and translator. It constructs complete numerical programs based on sets of algorithmic specifications and symbolic expressions. Formatted Fortran, RATFOR, or C code can be generated through a series of interactive commands or under the control of a template processing routine. This facility is available in Macsyma too.
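The idea behind GENTRAN-style code generation can be illustrated with a minimal sketch (the tree encoding and function names are invented; the real GENTRAN handles whole programs, declarations, and formatting): walk a symbolic expression and emit an assignment statement in the syntax of the target language.

```python
# Emit one assignment statement in Fortran or C from an expression tree.
# Trees are tuples: (operator, left, right); leaves are names or numbers.

def emit(expr, lang):
    if not isinstance(expr, tuple):
        return str(expr)
    op, a, b = expr
    if op == '^':
        if lang == 'fortran':                 # Fortran has a power operator
            return '(%s**%s)' % (emit(a, lang), emit(b, lang))
        return 'pow(%s, %s)' % (emit(a, lang), emit(b, lang))  # C does not
    return '(%s %s %s)' % (emit(a, lang), op, emit(b, lang))

def gentran(lhs, expr, lang='fortran'):
    """Translate lhs = expr into a statement of the target language."""
    stmt = '%s = %s' % (lhs, emit(expr, lang))
    return '      ' + stmt if lang == 'fortran' else stmt + ';'

# y = x^2 + 3*x + 1
e = ('+', ('+', ('^', 'x', 2), ('*', 3, 'x')), 1)
print(gentran('y', e, 'fortran'))   # "      y = (((x**2) + (3 * x)) + 1)"
print(gentran('y', e, 'c'))         # "y = ((pow(x, 2) + (3 * x)) + 1);"
```

The point is only that the same symbolic result can be printed under different target-language conventions; a production translator would also manage temporaries, common subexpressions, and type declarations.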

DERIVE

Derive (Rich et al., 1994) was developed by A. Rich and D. Stoutemyer and is marketed by Soft Warehouse Inc. It is also implemented in Lisp. Derive will run on any IBM PC compatible and does not require a mathematics coprocessor. Derive is the successor to muMATH and is menu-driven. Many commands and operations can be carried out with just two or three keystrokes. Beyond muMATH's capabilities, it has a powerful graphics package that can plot functions in two and three dimensions. One can plot more than one function on the same graph and use multiple windows for easy comparisons. Derive supports all basic symbolic mathematics, such as factorization, integration, and differentiation. It also understands matrices and vectors and can do basic vector calculus. Although Derive is less capable than other general-purpose computer algebra systems, the extent of its power based on such minimal hardware is remarkable. Nonetheless, it lacks, for example, procedures for solving systems of nonlinear equations, computation of eigenvectors of matrices, and special features such as Laplace transforms, Fourier transforms, and Bessel functions. The programming language of Derive provides only the definition of simple functions, which may be recursive, using an IF function and an ITERATE function as control structures (q.v.). All utility files are programmed in this language.

MATHEMATICA

The development of Mathematica (Wolfram, 1996) was started by S. Wolfram in 1986. The first version of the system was released by Wolfram Research, Inc. in 1988. Wolfram had previously developed the SMP computer algebra system in 1979-1981, which served as a forerunner of some elements of Mathematica. Mathematica was designed to be a computer algebra system with graphics, numerical computation, and a flexible programming language. In Mathematica, patterns are used to match classes of expressions with a given structure. Pattern matching and transformation/rewrite rules greatly simplify the programming of mathematical functions because one need only define replacements for patterns. For example, consider the definition of the logarithm of a product or a power:

In[1]:= log[x_ y_] := log[x] + log[y]
In[2]:= log[x_^y_] := y log[x]

These definitions are global rules. Such a rule is applied to all expressions automatically if the left-hand side of the rule matches the expression, i.e. the heads are equal and the arguments match. This is in contrast to rewrite rules, which are applied on demand. The notation x_ denotes a pattern that matches anything and is referred to as x in the right-hand side of the rule. The structure of such patterns can be very complex. For example, the pattern x:_^n_Integer?Positive matches any expression of the form a^b, where a is any expression and b a positive integer. The exponent b is then referred to as n, and the complete object is referred to as x. Writing a definition in terms of rules for specific patterns can obviate the need for extensive checking of argument values within the body of a definition. Pattern matching is structural, not mathematical, so b^2 is not recognized as the product b × b. However, the pattern matcher recognizes that multiplication is associative:

In[3]:= f[log[2 a b^2]]
Out[3]= f[log[2] + log[a] + 2 log[b]]
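A rough Python analogue of these two rules (a toy rewriter, not Mathematica's pattern matcher) shows how recursive application of the product and power rules flattens log[2 a b^2]:

```python
# Expressions are nested tuples; expand_log applies the product and
# power rules for log recursively, so nested products flatten fully.

def expand_log(e):
    if isinstance(e, tuple) and e[0] == 'log':
        arg = e[1]
        if isinstance(arg, tuple) and arg[0] == '*':      # log(x*y)
            return ('+', expand_log(('log', arg[1])),
                         expand_log(('log', arg[2])))
        if isinstance(arg, tuple) and arg[0] == '^':      # log(x^y)
            return ('*', arg[2], expand_log(('log', arg[1])))
    return e                                              # no rule applies

e = ('log', ('*', ('*', 2, 'a'), ('^', 'b', 2)))          # log(2*a*b^2)
print(expand_log(e))
# ('+', ('+', ('log', 2), ('log', 'a')), ('*', 2, ('log', 'b')))
```

As in Mathematica, the rules fire wherever their structure matches; the matching here is purely structural, so nothing mathematical (such as b^2 = b × b) is recognized.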


Mathematica’s colorful plotting features are very good. It provides two- and three-dimensional graphs, along with flexibility to rotate and change the viewpoint easily. The plot in Fig. 1 was generated with the command

Plot3D[Sin[x y], {x, 0, 3}, {y, 0, 3}, PlotPoints -> 31, Boxed -> False]

Mathematica’s kernel consists of about 1.1 million lines of C (q.v.) code and there are about 800,000 lines of Mathematica code in the distributed packages. The basic functionality of Mathematica is built into the kernel or coded in the Mathematica language in “start-up” packages. A wide variety of applications such as statistics and Laplace transforms are coded in Mathematica in “standard” packages that can be read in on request. Mathematica presents itself using a notebook-type graphical user interface. The user can mix text, animated graphics, and Mathematica input. This is an excellent tool for use in education and presentation of results.

Figure 1. Mathematica plot of the function sin(xy).


MAPLE

The Maple (Waterloo Maple Inc., 1996) project was started by K. Geddes and G. Gonnet at the University of Waterloo in November 1980. At present, Maple is distributed by Waterloo Maple Inc. It followed from the construction of an experimental system (named “wama”) which proved the feasibility of writing a symbolic computation system in system implementation languages and running it in a crowded time-sharing environment. Maple was designed and implemented to be a pleasant programming language, as well as being compact, efficient, portable, and extensible. The Maple language is reminiscent of Algol 68 (q.v.) without declarations, but also includes several functional programming (q.v.) paradigms. The internal mathematical libraries are written in the same language provided to users. Maple’s kernel interprets this language relatively efficiently. Most higher level functions or packages, about 95% of the functionality (e.g. integration, solving equations, normalization of expressions, radical simplification, and factorization), are coded in the user language. Primitive functions, like arithmetic, basic simplification, polynomial division, manipulations of structures, series arithmetic, and integer gcds, are coded in the kernel. In principle, the user should not notice the difference between using internal or external functions. The kernel is implemented in C and consists of about 45,000 lines of code. The implementation of the external functions uses about 1.2 million lines of Maple code. Maple supports a large collection of specialized data structures: integers, rationals, floating-point numbers, expression trees, series, equations, sets, ranges, lists, arrays, tables, etc. All of these are objects which can be easily type-tested, assembled, or disassembled. Major emphasis has been placed on readability, natural syntax, orthogonality, portability, compactness, and efficiency. Maple currently is used not only for symbolic computation, but also as a tool for teaching diverse courses (e.g. algebra and calculus, numerical analysis, economics, and mechanical engineering).

Maple makes extensive use of hash tables for various purposes (see SEARCHING). In particular, hashing (on signatures of expressions) is used to keep a single occurrence of any expression or subexpression in the system. This means that testing for equality is extremely inexpensive: it costs only one machine instruction. Tables, arrays, and the “partial computation table” are implemented internally as hash tables and hence are also very efficient. The motivation for remembering results lies in the observation that subexpressions may appear repeatedly in some computations. For example, computing the third derivative of e^sin(x) will compute the first derivative of sin(x) and cos(x) many times. Packages in Maple are collections of functions suitable for a special area, like linalg for linear algebra or numtheory for number theory. Functions from such packages can be called with the command “packagename[function]”. To avoid using these long names, one can set up short names for each function; for example, after the command with(linalg) one can use det(A) instead of linalg[det](A). Naming conflicts are always reported to the user. Maple V incorporates a new user interface for the X Window System that includes 3D plotting and separate help windows, allows editing of input expressions, and maintains a log of a Maple session. In Fig. 2 we see a plot generated by the command

plot3d( (x^2-y^2)/(x^2+y^2), x=-1..1, y=-1..1, grid=[31,31], axes=FRAME );

Figure 2. Maple’s surface plot of (x^2 - y^2)/(x^2 + y^2) for x, y ∈ -1..1.
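The hashing technique just described is often called hash-consing, and its essence fits in a few lines of Python (an illustration, not Maple's implementation): every node is interned in a table, so structurally equal expressions become the same object and equality is a single pointer test.

```python
# Hash-consing sketch: mk() interns expression nodes in a global table,
# so two structurally equal expressions share one copy in memory.

_table = {}

def mk(*node):
    """Return the unique stored copy of this expression node."""
    if node not in _table:
        _table[node] = node
    return _table[node]

a = mk('+', mk('^', 'x', 2), mk('^', 'y', 2))   # x^2 + y^2
b = mk('+', mk('^', 'x', 2), mk('^', 'y', 2))   # built independently
print(a is b)        # True: equality test is a pointer comparison
```

Because subexpressions are shared, a "remember table" keyed on these pointers is cheap, which is exactly what makes caching repeated derivatives of sin(x) and cos(x) worthwhile.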


AXIOM

Axiom (Jenks and Sutor, 1992) is a system developed at the IBM Thomas J. Watson Research Center and presently distributed by the Numerical Algorithms Group (NAG). Axiom is implemented in Lisp and runs on all major Unix (q.v.) platforms as well as on PCs. Axiom is both a language for casual algebraic computing and an object-oriented programming language complete with abstract data types (q.v.) and information hiding (q.v.), designed to allow description and implementation of mathematical code at a high level. Axiom also includes a compiler that can be used to extend the system with user-defined functions and data types. Every Axiom object has an associated data type that determines the operations that are applicable to the object. Axiom has a set of over 300 different data types, of which some pertain to algebraic computational objects while others are data structures. Some are simple, like Integer, RationalNumber, and Float, and some are parameterized, like Complex (e.g. Complex Integer for complex integers) and UnivariatePolynomial (e.g. UP(x, Q) for univariate polynomials in x over the rational numbers Q). However, the user may have to supply type declarations. The interpreter can usually determine a suitable type for an object, but not always. The following dialogue demonstrates this type assignment by the interpreter. The type is always printed on a line following the object itself.

1/2 + 1/6 + 1/12

        3
  (1)   -
        4
                     Type: Fraction Integer

(5 + %i)**3

  (2)  110 + 74%i
                     Type: Complex Integer

The portion of the Axiom language intended for programming computational procedures has several features novel to algebraic manipulation languages, although some reflect concepts developed in other languages in the past ten years, in particular parameterized abstract data types, modules, and inheritance. The unique abstract data type design of Axiom is based on the notion of categories.

Categories lay out classes (q.v.) and hierarchies of types. Defining a category means defining its relation in the existing hierarchy, extra parameters (e.g. the category Vector needs a parameter from the category Field to describe its scalars), operations which must be supported by its members, which are called domains, and properties the operations must satisfy. For example, consider the category OrderedSet:


OrderedSet(): Category == SetCategory with
    -- operations
    "<":  ($, $) -> Boolean
    max:  ($, $) -> $
    min:  ($, $) -> $
    -- attributes
    irreflexive "<"     -- not (x < x)
    transitive "<"      -- x < y and y < z => x < z
    total "<"           -- not (x < y) and not (y < x) => x = y
  add
    max(x, y) == (x < y => y; x)
    min(x, y) == (x < y => x; y)

This definition gives a category which extends (inherits) the category SetCategory by requiring three additional operations and three properties. If some operations are expressible by others, the implementation of these may also be put in the definition, like max and min in the above example. Examples of categories in Axiom are algebraic concepts such as Group, AbelianGroup, Ring, EuclideanDomain, Field, and UnivariatePolynomialCategory(R: Ring).

Domains are instances of categories; this means that they define an actual data representation and provide functions implementing the operations of a category in accordance with the stated attributes. Domains can be parameterized too; for example, SparseUnivariatePolynomial(R) takes one parameter R, the coefficient ring, which must be of type Ring (a category), and the special representation used is a linked list of coefficients. This concept also allows us to implement an algorithm only once for a given category and to use it for any values for which it makes sense. For example, the Euclidean algorithm can be used for values belonging to any domain which is a Euclidean domain (a category!). The following package takes a Euclidean domain as a type parameter and exports the operation gcd on that type.

GCDpackage(R: EuclideanDomain): with
    gcd: (R, R) -> R
  == add
    gcd(x, y) ==
      x := unitNormal(x).canonical
      y := unitNormal(y).canonical
      while y ^= 0 repeat
        (x, y) := (y, x rem y)
        y := unitNormal(y).canonical
      x

This gcd operation is now polymorphic and may be used for many types, e.g. Z, SUP(Q), or SUP(IntegerMod 11). It could also be put in the definition of the category EuclideanDomain.
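The same polymorphism can be imitated in Python with duck typing standing in for the EuclideanDomain category (a sketch with invented names, not Axiom code): the one gcd routine below works unchanged for machine integers and for a small polynomial type that supplies its own remainder operation.

```python
# One gcd for every "Euclidean" type: anything supporting % and a
# comparison against 0 qualifies. Poly is a minimal dense polynomial
# over Q playing the role of an Axiom domain.

from fractions import Fraction

def gcd(x, y):
    while y != 0:
        x, y = y, x % y
    return x

class Poly:
    """coeffs[i] is the coefficient of x^i; trailing zeros stripped."""
    def __init__(self, coeffs):
        c = [Fraction(a) for a in coeffs]
        while c and c[-1] == 0:
            c.pop()
        self.coeffs = c
    def __eq__(self, other):
        if isinstance(other, int):
            return self.coeffs == ([] if other == 0 else [Fraction(other)])
        return self.coeffs == other.coeffs
    def __ne__(self, other):
        return not self == other
    def __mod__(self, other):
        r, d = self.coeffs[:], other.coeffs
        while len(r) >= len(d):
            q = r[-1] / d[-1]                 # leading-term quotient
            for i in range(len(d)):
                r[len(r) - len(d) + i] -= q * d[i]
            r.pop()                           # leading term is now zero
        return Poly(r)

print(gcd(12, 18))                            # 6
p, q = Poly([-1, 0, 1]), Poly([1, 1])         # x^2 - 1 and x + 1
print(gcd(p, q).coeffs)                       # coefficients of x + 1
```

Axiom gets the same effect statically, through the category hierarchy, rather than by runtime duck typing.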


MuPAD

The MuPAD (Fuchssteiner et al., 1996) project was initiated by Benno Fuchssteiner at the University of Paderborn, Germany. It started in 1989 as a collection of master's theses done by students at the University of Paderborn. MuPAD was at first available free of charge; in 1998, marketing and user interface development were turned over to a commercial company, SciFace Software GmbH. MuPAD features a programming language and a set of data structures that are similar to Maple's. One important enhancement that MuPAD introduces is the concept of domains, which encapsulate code into packages and introduce types in an otherwise untyped environment, though at the expense of some efficiency loss. The following is a simple example of creating the domain of integers modulo 3, creating elements of the corresponding type, and doing basic arithmetic on them:

>> Z3 := Dom::IntegerMod(3);
              Dom::IntegerMod(3)

>> a := Z3(10); b := Z3(2);
              1 mod 3
              2 mod 3

>> a + 2*b;
              2 mod 3

MuPAD also lets the user query domain properties, such as verifying that the integers modulo 3 form a field:

>> Z3::hasProp( Cat::Field );
              TRUE

Like Maple, MuPAD is extensible by user functions written in its own programming language. MuPAD also allows extensions of its kernel (written in C++, q.v.) by dynamically loadable modules. Being a younger system, MuPAD does not yet have all the functionality and algorithms of its competitors. Nevertheless, it has found its community. It also builds on available free software; for example, it uses the Pari package for its arbitrary precision arithmetic.

TI-92

The Texas Instruments TI-92 hand-held calculator is an attempt to bring computer algebra to the mainstream education market. The first version was available in 1996 and featured a fairly complete computer algebra library, a geometry engine, and two- and three-dimensional plotting capabilities. Because of its small physical size, the calculator struggles with memory restrictions (68K RAM, upgradable to 288K) and screen space (128 × 240 pixels; Fig. 3).

Figure 3. TI-92 screenshot (showing the result x = i or x = -i).

Computer algebra typically involves large data structures, and algorithms often require huge amounts of intermediate storage. For classroom problems arising in high school, however, the TI-92 performs surprisingly well. The computer algebra software built into the TI-92 was written by the creators of Derive. Unlike Derive, the TI-92 is not menu driven, but features a command-line interface similar to systems like Maple and Mathematica. Commands are, however, also available through pull-down menus or special keys.

Parallel Computer Algebra Systems

There are numerous applications of computer algebra; however, the algorithms involved are complex and often require large amounts of memory and processing time. It is therefore natural to try to exploit the parallelism contained in many algorithms in computer algebra, and as a consequence many experimental parallel systems have emerged. Most are based on existing (sequential) computer algebra systems. Watt, Siegl (||MAPLE||), Char (Sugarbush), Diaz et al. (DSC), and Bernardin used similar approaches to combine several Maple kernels to form a parallel system. Melenk and Neun did the same using Reduce and a parallel Lisp implementation. Küchlin's PARSAC-2 is based on the sequential SAC-2 library. Gautier and Roch designed their PAC++ as a parallel computer algebra system from the ground up, with emphasis on load balancing. MuPAD has some language constructs for parallel computing (parallel for-loops), yet, as of 1999, no parallel implementation of MuPAD was available.
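The Dom::IntegerMod(3) example from the MuPAD section above can be mimicked with a Python class factory playing the role of a domain constructor (names invented for illustration):

```python
# A "domain constructor": IntegerMod(n) manufactures a type whose
# instances carry modular arithmetic, much like Dom::IntegerMod(n).

def IntegerMod(n):
    class Mod:
        def __init__(self, v):
            self.v = v % n
        def __add__(self, other):
            return Mod(self.v + other.v)
        def __rmul__(self, k):               # integer scalar on the left
            return Mod(k * self.v)
        def __eq__(self, other):
            return isinstance(other, Mod) and self.v == other.v
        def __repr__(self):
            return '%d mod %d' % (self.v, n)
    return Mod

Z3 = IntegerMod(3)
a, b = Z3(10), Z3(2)
print(a)            # 1 mod 3
print(a + 2 * b)    # 2 mod 3
```

As in MuPAD, the payoff is that the type carries its arithmetic with it, at the cost of a method dispatch on every operation.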

Examples

In this section we present some examples which define the boundaries of what computer algebra systems can and cannot do. The successful results we call "remarkable" are so in the sense that it is surprising that computer algebra systems can obtain them. In the future, systems might evolve that will be able to solve the ones we now call "difficult/impossible."


The output that follows in the remainder of this article approximates its appearance on a bit-mapped screen; the output on an ASCII terminal would be somewhat cruder.

DIFFICULT/IMPOSSIBLE PROBLEMS

• For which values of n is ∫ dx/(xⁿ√(…)) integrable in closed form?

• Let H be a Banach space … (computer algebra systems cannot handle such abstract concepts at present).

• Test whether the expression Γ(3/2) − (1/2)π^(1/2) is equal to zero or not (yes, indeed, it is equal to 0).

• Solve the nonlinear differential equation y″(x) = 2y²(x)/x.

REMARKABLE SOLUTIONS

• Series expansion of

     R(s) = ∫₀ˢ ln(1 + st)/(1 + t²) dt

• We define {xₙ} to be the sequence of iterated sines, i.e. xₙ₊₁ = sin(xₙ). What is the asymptotic expansion of xₙ for n → ∞? The expansion involves a constant C which depends on the value of x₀.

• ∫ tan(arctan(x)/3) dx in terms of tangents and arctangents:

  integrate( tan(atan(x)/3), x )

  (1)  ( 8 log( 3 tan(atan(x)/3)^2 - 1 ) - 3 tan(atan(x)/3)^2
         + 18 x tan(atan(x)/3) + 16 ) / 18
                 Type: Union(Expression Integer, List Expression Integer)
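The iterated-sine answer can at least be checked numerically: for xₙ₊₁ = sin(xₙ) the leading term of the expansion is √(3/n) (the logarithmic correction carries the constant C), which direct iteration confirms.

```python
# Numeric sanity check: iterate x -> sin(x) many times and compare
# against the leading asymptotic term sqrt(3/n).

import math

x = 1.0
n = 100000
for _ in range(n):
    x = math.sin(x)

print(x, math.sqrt(3.0 / n))    # the two agree to several decimal places
print(n * x * x)                # tends to 3 as n grows
```

The slow drift of n·xₙ² away from exactly 3 is the logarithmic correction term that a full asymptotic expansion makes explicit.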

Comparative Examples

In this section we present a few examples of some operations commonly done in symbolic computation. The examples are presented with all the declarations or environment settings which are necessary to perform the operations. We have used the following environments:

Maple V R5         Sun Sparcstation
Mathematica 3.0    Sun Sparcstation
Reduce 3.6         Sun Sparcstation
Macsyma 420.0      Sun Sparcstation
Axiom 2.1          Sun Sparcstation
Derive 3.06        386-based DOS system
MuPAD 1.4          Sun Sparcstation
TI-92

For each product we have used a version available in 1998. In the Derive examples menu options are denoted by square brackets. Thus, [A]uthor means the menu option A (Author).


LIMIT COMPUTATION

The answer to lim_{x→∞} (e + 1)^(x^2)/e^x is obviously ∞, but L'Hospital's rule will fail to compute the limit:

Maple

> limit( (exp(1)+1)^(x^2)/exp(1)^x, x=infinity );
                 infinity

Mathematica

In[1]:= Limit[ (E+1)^(x^2)/E^x, x -> Infinity ]
Out[1]= ∞

Note: To obtain this result the Mathematica Calculus/Limit package must be loaded.

Reduce

returns the limit unevaluated.

Macsyma

(c1) limit( (%e+1)^(x^2)/%e^x, x, inf );
(d1)             infinity

Axiom

(1) -> limit( exp(x**2*log(1 + exp 1)) / exp x, x = %plusInfinity )
(1)   + infinity
                 Type: Union(OrderedCompletion Expression Integer, ...)

Derive

[A]uthor (ê+1)^(x^2)/ê^x
[C]alculus [L]imit inf

MuPAD

>> limit( (E+1)^(x^2)/E^x, x=infinity );
                 infinity

TI-92

returns undefined.

SERIES EXPANSION

The series for tan(sin(x)) and sin(tan(x)) agree through three terms. Compute the series expansion of the difference up to order 13:

Maple

> series( sin(tan(x)) - tan(sin(x)), x, 14 );
   - 1/30 x^7 - 29/756 x^9 - 1913/75600 x^11 - 95/7392 x^13 + O(x^14)

Mathematica

In[1]:= Series[ Sin[Tan[x]] - Tan[Sin[x]], {x, 0, 13} ]
Out[1]= -x^7/30 - 29 x^9/756 - 1913 x^11/75600 - 95 x^13/7392 + O[x]^14

Reduce

1: load taylor;
2: taylor( sin(tan(x)) - tan(sin(x)), x, 0, 13 );
   - x^7/30 - 29 x^9/756 - 1913 x^11/75600 - 95 x^13/7392 + O(x^14)

Macsyma

(c1) taylor( sin(tan(x)) - tan(sin(x)), x, 0, 13 );
(d1)/T/  - x^7/30 - 29 x^9/756 - 1913 x^11/75600 - 95 x^13/7392 + ...

Axiom

(1) -> series( sin(tan x) - tan(sin x), x = 0, 13 )
   - x^7/30 - 29 x^9/756 - 1913 x^11/75600 - 95 x^13/7392 + ...
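The coefficients the systems agree on can be reproduced independently with exact rational series arithmetic (a from-scratch Python sketch using truncated power series, not any CAS):

```python
# Truncated power series over Q: lists of 14 Fractions, coefficient of
# x^k at index k. Enough machinery to form sin(tan x) - tan(sin x).

from fractions import Fraction as F
from math import factorial

N = 14

def mul(a, b):                       # product, truncated at x^13
    c = [F(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def div(a, b):                       # a/b, assuming b[0] != 0
    c = [F(0)] * N
    for k in range(N):
        c[k] = (a[k] - sum(b[j] * c[k - j] for j in range(1, k + 1))) / b[0]
    return c

def compose(a, b):                   # a(b(x)), assuming b[0] == 0 (Horner)
    r = [F(0)] * N
    for k in range(N - 1, -1, -1):
        r = mul(r, b)
        r[0] += a[k]
    return r

sin_s = [F((-1) ** (k // 2), factorial(k)) if k % 2 else F(0) for k in range(N)]
cos_s = [F((-1) ** (k // 2), factorial(k)) if k % 2 == 0 else F(0) for k in range(N)]
tan_s = div(sin_s, cos_s)

diff = [u - v for u, v in zip(compose(sin_s, tan_s), compose(tan_s, sin_s))]
print(diff[7], diff[9], diff[11], diff[13])
# -1/30 -29/756 -1913/75600 -95/7392
```

Working over Fraction keeps every coefficient exact, which is precisely why CAS series expansion gives clean rationals rather than floating-point approximations.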


Derive

[A]uthor sin tan x - tan sin x
[C]alculus [T]aylor 13
[S]implify
   - 95 x^13/7392 - 1913 x^11/75600 - 29 x^9/756 - x^7/30

MuPAD

   - x^7/30 - 29 x^9/756 - 1913 x^11/75600 - 95 x^13/7392 + O(x^14)

TI-92

ran out of memory.

LINEAR ODE

Solve the second-order differential equation

   y''(x) + 2 y'(x) + y(x) = cos(x)

Maple

Reduce

1: load odesolve;
2: odesolve( df(y(x),x,x) + 2*df(y(x),x) + y(x) = cos(x), y(x), x );
   {y(x) = (2*arbconst(2)*x + 2*arbconst(1) + e^x*sin(x)) / (2*e^x)}

Macsyma

(c1) deq: 'diff(y,x,2) + 2*'diff(y,x) + y = cos(x);
(c2) ode(deq, y, x);
(d2)  y = sin(x)/2 + (%k2 x + %k1) %e^-x

Axiom

(1) -> y := operator y;
(2) -> deq := differentiate(y x, x, 2) + 2*differentiate(y x, x) + y x = cos x;
(3) -> solve(deq, y, x)
(3)  [particular = sin(x)/2, basis = [%e^-x, x %e^-x]]
       Type: Union(Record(particular: Expression Integer,
                          basis: List Expression Integer), ...)

Derive

[T]ransfer [L]oad [U]tility ODE2
[A]uthor LIN2_POS(2, 1, cos(x), x)
[S]implify
   e^-x (c2 x + c1) + SIN(x)/2

MuPAD

>> dgl := y''(x) + 2*y'(x) + y(x) = cos(x);

   sin(x)/2 + C1 exp(-x) + C2 x exp(-x)

TI-92

cannot do differential equations.
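The solution the systems agree on, y = sin(x)/2 + (c1 + c2 x) e^(-x), can be verified by brute force: writing out its first two derivatives by hand and checking that the residual of the equation vanishes (a numeric spot check, not a CAS computation).

```python
# Residual of y'' + 2y' + y - cos(x) for the claimed general solution,
# with the derivatives written out by hand.

import math

def residual(x, c1=1.3, c2=-0.7):
    e = math.exp(-x)
    y   = math.sin(x) / 2 + (c1 + c2 * x) * e
    yp  = math.cos(x) / 2 + (c2 - c1 - c2 * x) * e
    ypp = -math.sin(x) / 2 + (c1 - 2 * c2 + c2 * x) * e
    return ypp + 2 * yp + y - math.cos(x)

# effectively zero (rounding error only) over a range of x and any c1, c2
print(max(abs(residual(x / 10.0)) for x in range(-30, 31)))
```

That the residual vanishes for arbitrary c1, c2 reflects the structure of the answer: a particular solution sin(x)/2 plus the basis {e^(-x), x e^(-x)} of the homogeneous equation, exactly as Axiom reports it.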


SYMBOLIC INTEGRATION

Integrate (x + 1)/(x^2 + x + 1).

Maple

> int( (x+1)/(x^2+x+1), x );
   1/2 ln(x^2 + x + 1) + 1/3 3^(1/2) arctan(1/3 (2 x + 1) 3^(1/2))

Mathematica

In[1]:= Integrate[ (x+1)/(x^2+x+1), x ]
Out[1]= ArcTan[(1 + 2 x)/Sqrt[3]]/Sqrt[3] + Log[1 + x + x^2]/2

Reduce

1: int( (x+1)/(x^2+x+1), x );
   (2*sqrt(3)*atan((2*x + 1)/sqrt(3)) + 3*log(x^2 + x + 1)) / 6

Macsyma

(c1) integrate( (x+1)/(x^2+x+1), x );
(d1)  atan((2 x + 1)/sqrt(3))/sqrt(3) + log(x^2 + x + 1)/2

Axiom

(1) -> integrate( (x+1)/(x**2+x+1), x )
(1)  ( sqrt(3) log(x^2 + x + 1) + 2 atan((2 x + 1) sqrt(3)/3) ) / (2 sqrt(3))
       Type: Union(Expression Integer, ...)

Derive

[A]uthor (x+1)/(x^2+x+1)
[C]alculus [I]ntegrate
[S]implify
   sqrt(3) ATAN( sqrt(3) (2 x + 1)/3 )/3 + LN(x^2 + x + 1)/2

MuPAD

>> int( (x+1)/(x^2+x+1), x );
   ln((x + 1/2)^2 + 3/4)/2 + 3^(1/2) atan(3^(1/2) (2 x + 1)/3)/3

TI-92

∫( (x+1)/(x^2+x+1), x )

RECURRENCE EQUATION

Solve the recurrence equation s_n = -3 s_{n-1} - 2 s_{n-2}.

Maple
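This recurrence is easy to solve by hand before consulting the systems: the characteristic polynomial x^2 + 3x + 2 = (x + 1)(x + 2) gives the closed form s_n = A(-1)^n + B(-2)^n, which a direct iteration confirms (Python used only as a checker).

```python
# Check the closed form s_n = A*(-1)^n + B*(-2)^n against iteration
# of s_n = -3*s_{n-1} - 2*s_{n-2}.

def iterate(s0, s1, n):
    s = [s0, s1]
    for _ in range(n - 1):
        s.append(-3 * s[-1] - 2 * s[-2])
    return s

s0, s1 = 1, 0
# Fit A, B from the initial conditions: A + B = s0, -A - 2B = s1
B = -(s0 + s1)                 # solving the 2x2 system by hand
A = s0 - B

seq = iterate(s0, s1, 10)
closed = [A * (-1) ** n + B * (-2) ** n for n in range(11)]
print(seq == closed)           # True
```

This is exactly the computation a CAS recurrence solver performs symbolically: factor the characteristic polynomial, then fit the constants to the initial conditions.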


Mathematica


In[1]:=