Constructive Arithmetics in Ore Localizations of Domains

Johannes Hoffmann, Viktor Levandovskyy

arXiv:1712.01773v1 [math.RA] 5 Dec 2017

Lehrstuhl D für Mathematik, RWTH Aachen University

Abstract. For a non-commutative domain R and a multiplicatively closed set S, the (left) Ore localization of R at S exists if and only if S satisfies the (left) Ore property. Since the concept was introduced by Ore back in the 1930s, Ore localizations have been widely used in theory and in applications. We investigate the arithmetics of the localized ring S⁻¹R from both theoretical and practical points of view. We show that the key component of the arithmetics is the computation of the intersection of a left ideal with a submonoid S of R. It is not yet known whether an algorithmic solution of this problem exists in general. Still, we provide such solutions for cases where S is equipped with additional structure by distilling the three most frequently occurring types of Ore sets. We introduce the notion of the (left) saturation closure and prove that it is a canonical form for (left) Ore sets in R. We provide an implementation of arithmetics over the ubiquitous G-algebras in Singular:Plural and discuss questions arising in this context. Numerous examples illustrate the effectiveness of the proposed approach.

Contents

1 Basics of left Ore localization
2 A brief introduction to the left saturation closure
3 A constructive approach to the left Ore condition
  3.1 The kernel technique
  3.2 The class of G-algebras
  3.3 A partial classification of Ore localizations
4 Case study: localizations of the first Weyl algebra
  4.1 Characteristic zero case
  4.2 Positive characteristic case
5 Computing the intersection of a left ideal and a left Ore set
  5.1 Monoidal localizations
  5.2 Geometric localizations of Weyl algebras
  5.3 Rational localizations
6 Further algorithmic aspects
  6.1 The right side analogon
  6.2 The left-right conundrum
  6.3 Basic arithmetic
  6.4 Computing inverses
  6.5 Canceling a fraction
7 Implementation
  7.1 Setting, conventions and restrictions
  7.2 Data structure
  7.3 Procedures
  7.4 Examples
8 Conclusion and future work
9 Acknowledgements

Preprint submitted to Elsevier, October 11, 2018

Introduction

At the beginning of the 1930s Øystein Ore introduced several algebraic concepts [23, 24] which have seriously influenced the development of algebra and its applications. One of them, the Ore extension of a ring, proved to be a very useful generalization of the construction of commutative rings. Another one is Ore localization, which is utilized very widely, from ring theory to algebras of operators arising in algebraic analysis and algebraic combinatorics. The very formalism of Ore localization was theoretically constructive from its inception. While computations with finitely presented algebras form a part of computer algebra, traditionally assisted by (non-commutative) Gröbner bases, localization in general allows us to recognize the structure of objects in a variety of non-finitely presented algebras. The latter has been used intensively in algebraic geometry and commutative algebra, accompanied by algorithms and implementations since the 1980s, see e.g. [8]. It is natural to apply the same philosophy to non-commutative rings, and with this paper we present our investigations for domains.

The major task, which we call our Ore Dream, consists in the following: provide procedures and, ideally, algorithms and computer-assisted tools for manipulating left or right fractions in an Ore localization of a domain with respect to a (multiplicatively closed) Ore set S. We investigate this task in detail for a domain R and identify a key problem for algorithmic computations: the intersection of a left ideal with a submonoid S of R. To the best of our knowledge no algorithmic solution to this problem exists if only the monoid structure of S is taken into account. We propose to specify the type of an Ore set according to the presence of additional algebraic structure and address three common types which appear most frequently in applications. For each of these we provide a solution to the key problem and discuss the occurring limitations.
The need for Gröbner bases over domains and, in particular, for elimination and syzygies inspired the restriction of the rings under consideration to the broad class of ubiquitous G-algebras (cf. Section 3.2). Historically, perhaps the first connection between the arithmetic operations in the quotient field (which is an example of an Ore localization) of a Noetherian domain R and syzygies over R was the paper [2] by Apel and Lassner. They analyzed the case where R is a universal enveloping algebra of a finite-dimensional Lie algebra. Notably, the extension of these results to the whole class of PBW algebras was completed in [4]. We analyze the approach to arithmetics of fractions in an Ore localization from the point of view of computability. Moreover, we present an implementation olga.lib in the computer algebra

system Singular:Plural [7]. To the best of our knowledge, apart from olga.lib and JAS ([17]), which performs similar computations even over parametric solvable polynomial rings [16], no other package offers constructive computations at this level of generality. However, the price we pay for this is high: in general, Gröbner bases over related rings are invoked for manipulations with fractions both in Plural and in JAS. There are several packages for computer algebra systems dealing with similar situations, most notably OreTools [1] and OreAlgebra [5] in Maple, ore_algebra [12] in Sage, and HolonomicFunctions [13] in Mathematica. These work over predefined algebras, such as univariate algebras of operators (differential, difference and q-difference among the most prominent ones, cf. [5]) over a field of rational functions as coefficient domain (these rings are also Ore localizations). In such situations, as investigated e.g. in [9, 25], one can even estimate the complexity of operations. In contrast, our development serves a general purpose; in the future one could develop better specialized algorithms for new algebras and/or their Ore subsets.

This paper is an extended, expanded and enhanced version of the paper [10], which appeared at ISSAC 2017 in Kaiserslautern, Germany. Proofs have been either restored from the abridged version or expanded in detail. A new part on the simplification procedure for fractions has been added. We enhanced the presented examples and added a new Section 4 devoted to a lively case study. We updated the exposition with recent results and publications in the area. In the meantime our implementation olga.lib has also been significantly improved.

1. Basics of left Ore localization

In this section we recall the classical material based on Ore's original paper [23], following a modern exposition inspired by [4]:

Definition 1.1. Let R be a domain. A subset S of R is called multiplicatively closed if 1 ∈ S, 0 ∉ S and for all s, t ∈ S we have s · t ∈ S.
Furthermore, S is called a left Ore set if it is multiplicatively closed and satisfies the left Ore condition: for all s ∈ S and r ∈ R there exist s̃ ∈ S and r̃ ∈ R such that s̃r = r̃s. Any subset B of R \ {0} has a minimal multiplicatively closed superset [B], which consists of all finite products of elements of B, where the empty product represents 1.

Definition 1.2. Let S be a multiplicatively closed subset of a domain R. A ring RS together with an injective homomorphism ϕ : R → RS is called a left Ore localization of R at S if:
1. For all s ∈ S, ϕ(s) is a unit in RS.
2. For all x ∈ RS there exist s ∈ S and r ∈ R such that x = ϕ(s)⁻¹ϕ(r).

One can show that the Ore localization of R at S exists if and only if S is a left Ore set in R. In this case, the localization is unique up to isomorphism. The classical construction is given by the following:

Theorem 1.3. Let S be a left Ore set in a domain R. The relation ∼ on S × R, given by

(s1, r1) ∼ (s2, r2) ⇔ ∃ s̃ ∈ S ∃ r̃ ∈ R : s̃s2 = r̃s1 and s̃r2 = r̃r1,

is an equivalence relation. Now S⁻¹R := ((S × R)/∼, +, ·) becomes a ring via

(s1, r1) + (s2, r2) := (s̃s1, s̃r1 + r̃r2),

where s̃ ∈ S and r̃ ∈ R satisfy s̃s1 = r̃s2, and

(s1, r1) · (s2, r2) := (s̃s1, r̃r2),

where s̃ ∈ S and r̃ ∈ R satisfy r̃s2 = s̃r1. Together with the injective structural homomorphism

ρS,R : R → S⁻¹R, r ↦ (1, r),

(S⁻¹R, ρS,R) is the left Ore localization of R at S. The elements of S⁻¹R are called left fractions and are denoted by s⁻¹r or, by abuse of notation, again by (s, r). Some basic facts concerning the localization are collected below:

Lemma 1.4. Let S be a left Ore set in a domain R and (s, r) ∈ S⁻¹R.
(a) In S⁻¹R we have 0 = (1R, 0R) and 1 = (1R, 1R).
(b) (s, r) = 1 if and only if s = r.
(c) (s, r) = 0 if and only if r = 0.
(d) Let t ∈ R. If ts ∈ S, then (s, r) = (ts, tr).
(e) −(s, r) = (s, −r).
(f) S⁻¹R is a domain.
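The addition and multiplication formulas of Theorem 1.3 can be exercised in a minimal commutative sandbox, here R = ℤ with S = [2], the powers of 2 (this toy instance and the helper names are our choice for illustration, not part of any package). In a commutative ring the Ore conditions s̃s1 = r̃s2 and r̃s2 = s̃r1 are solved trivially by taking s̃ = s2:

```python
from fractions import Fraction

# Left fractions (s, r) represent s^(-1) * r; sandbox: R = Z, S = powers of 2.
def ore_add(f1, f2):
    (s1, r1), (s2, r2) = f1, f2
    # solve s~ * s1 = r~ * s2 with s~ in S: commutatively take s~ = s2, r~ = s1
    s_t, r_t = s2, s1
    return (s_t * s1, s_t * r1 + r_t * r2)

def ore_mul(f1, f2):
    (s1, r1), (s2, r2) = f1, f2
    # solve r~ * s2 = s~ * r1 with s~ in S: commutatively take s~ = s2, r~ = r1
    s_t, r_t = s2, r1
    return (s_t * s1, r_t * r2)

a, b = (2, 3), (4, 5)  # the fractions 3/2 and 5/4
s, r = ore_add(a, b)
assert Fraction(r, s) == Fraction(3, 2) + Fraction(5, 4)
s, r = ore_mul(a, b)
assert Fraction(r, s) == Fraction(3, 2) * Fraction(5, 4)
```

In the genuinely non-commutative setting the two "solve" comments are exactly where the left Ore condition, and hence Gröbner-style computation, enters.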

According to the previous lemma, additive inverses of left fractions are quite easy to find, but what about multiplicative inverses? If r ∈ S, then the inverse of (s, r) is given by (r, s). But there might be other invertible left fractions whose numerators do not belong to S, a phenomenon that occurs even in commutative localizations:

Example 1.5. Consider the localization K[x]ₓ² := [x²]⁻¹K[x], then x ∉ [x²], but (1, x) is invertible with inverse (x², x).

We turn to the theory of the left saturation closure to find a complete description of the invertible elements of the localization.

2. A brief introduction to the left saturation closure

From this point on we present new results unless stated otherwise. In this section let R be a domain.

Definition 2.1. A subset S of R with 0 ∉ S is called left (resp. right) saturated if for all a, b ∈ R, a · b ∈ S implies b ∈ S (resp. a ∈ S). Furthermore, S is called saturated if it is both left and right saturated.

While the notion of multiplicative closure only depends on the multiplication and is unchanged under embedding R into a larger ring, being saturated involves factorization and thus heavily depends on the context: the set S := ℤ \ {0} is both multiplicatively closed and saturated in ℤ. In ℚ it is still multiplicatively closed, but no longer saturated, since 2 · (1/2) = 1 ∈ S, but 1/2 ∉ S.

Definition 2.2. Let S be a multiplicatively closed subset of R. The left saturation closure of S is LSat(S) := {r ∈ R | ∃ w ∈ R : wr ∈ S}.

Since 1 ∈ S ⊆ R we have S ⊆ LSat(S), in particular 1 ∈ LSat(S). Furthermore, 0 ∉ LSat(S) since 0 ∉ S. The following lemma justifies the name "left saturation closure".

Lemma 2.3. Let S be a multiplicatively closed subset of R.
(a) LSat(S) is left saturated.
(b) S is left saturated if and only if S = LSat(S).
(c) LSat(S) is the smallest left saturated superset of S with respect to inclusion.

Proof. (a) Let a, b ∈ R such that ab ∈ LSat(S), then there exists w ∈ R such that wab ∈ S, thus b ∈ LSat(S).
(b) Let r ∈ LSat(S), then there exists w ∈ R such that wr ∈ S. If S is left saturated this implies r ∈ S, thus S = LSat(S). For the reverse implication note that LSat(S) is left saturated by (a), so if S = LSat(S), then S is left saturated as well.
(c) Let Q ⊆ R be a left saturated set such that S ⊆ Q. Let r ∈ LSat(S), then there exists w ∈ R such that wr ∈ S ⊆ Q. Since Q is left saturated we have r ∈ Q and thus LSat(S) ⊆ Q.
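In a commutative sandbox the membership problem for the left saturation closure is decidable by factoring: for R = ℚ[x] and S = [g], an element r ≠ 0 lies in LSat([g]) exactly when r divides some power of g, i.e. when every irreducible factor of r divides g (nonzero constants are units and always belong). A sympy sketch, where the helper name is ours:

```python
# Sandbox: R = Q[x], S = [g]. Then r in LSat([g]) iff w*r is a power of g for
# some w, iff r divides g**m for some m, iff every irreducible factor of r
# divides g. Nonzero constants are units of Q[x] and trivially belong.
from sympy import symbols, factor_list, rem

x = symbols('x')

def in_lsat_of(g, r):
    if r == 0:
        return False
    _, factors = factor_list(r)  # content and irreducible factors of r
    return all(rem(g, q, x) == 0 for q, _ in factors)

print(in_lsat_of(x**2, x))      # True: x * x = x**2 lies in [x**2], cf. Example 1.5
print(in_lsat_of(x**2, x + 1))  # False
```

In particular x ∈ LSat([x²]), matching the invertibility of (1, x) observed in Example 1.5.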

Denote the set of units of R by U(R). Now we answer the question posed at the end of the last section concerning invertible elements:

Proposition 2.4. Let S be a left Ore set in R and (s, r) ∈ S⁻¹R. Then the following are equivalent:
(1) (s, r) ∈ U(S⁻¹R).
(2) (1, r) = ρS,R(r) ∈ U(S⁻¹R).
(3) r ∈ ρS,R⁻¹(U(S⁻¹R)).
(4) r ∈ LSat(S).

Proof. Statements (2) and (3) are equivalent since ρS,R(r) = (1, r). Furthermore, (s, r) = (s, 1) · (1, r), where (s, 1) ∈ U(S⁻¹R) with inverse (1, s), which shows the equivalence of (1) and (2). Starting from (2), let (1, r) ∈ U(S⁻¹R), then there exists (s, w) ∈ S⁻¹R such that 1 = (s, w) · (1, r) = (s, wr), which implies wr = s ∈ S, thus r ∈ LSat(S) and we have reached (4). For the reverse implication, let r ∈ LSat(S), then there exists w ∈ R such that wr ∈ S. Now (wr, w) · (1, r) = (wr, wr) = 1 implies that (1, r) ∈ U(S⁻¹R).

This implies that the left saturation closure of left Ore sets is actually saturated on both sides:

Lemma 2.5. Let S be a left Ore set in R. Then LSat(S) is saturated.

Proof. Let a, b ∈ R such that a · b ∈ LSat(S). Then (1, ab) is a unit in S⁻¹R by the previous result. Now (1, a) · (1, b) = (1, ab) implies that both (1, a) and (1, b) are also units in S⁻¹R since S⁻¹R is a domain, thus a, b ∈ LSat(S) again by the previous result.

Therefore a left Ore set S is saturated if and only if S = LSat(S). Apart from this characterization, the left saturation closure has even more interesting applications. For a start we see that LSat preserves and reflects the left Ore condition:

Lemma 2.6. Let S be a multiplicatively closed subset of R. Then S satisfies the left Ore condition in R if and only if LSat(S) satisfies the left Ore condition in R.

Proof. Let x ∈ LSat(S), r ∈ R and w ∈ R such that wx ∈ S. If S satisfies the left Ore condition, then there exist s̃ ∈ S and r̃ ∈ R such that s̃r = r̃wx. Since s̃ ∈ S ⊆ LSat(S) and r̃w ∈ R this implies that LSat(S) satisfies the left Ore condition. For the other implication, let r ∈ R and s ∈ S ⊆ LSat(S). If LSat(S) satisfies the left Ore condition, then there exist x ∈ LSat(S) and r̃ ∈ R such that xr = r̃s. Let w ∈ R such that ŝ := wx ∈ S and define r̂ := wr̃ ∈ R, then ŝr = wxr = wr̃s = r̂s shows that S satisfies the left Ore condition.

While the left saturation closure of a multiplicatively closed set is not multiplicatively closed in general, the left Ore condition is sufficient to overcome this problem.

Proposition 2.7. Let S be a left Ore set in R. Then LSat(S) is a left Ore set in R and S⁻¹R ≅ LSat(S)⁻¹R.

Proof. For the first part it remains to show that LSat(S) is multiplicatively closed: let x, y ∈ LSat(S), then there exist a, b ∈ R such that ax, by ∈ S. By the left Ore condition on S there exist s̃ ∈ S and r̃ ∈ R such that s̃b = r̃ax. Now w := r̃a ∈ R and wxy = s̃by ∈ S shows that xy ∈ LSat(S). For the second part, consider the map

ψ : S⁻¹R → LSat(S)⁻¹R, (s, r) ↦ (s, r),

which can be shown to be an injective homomorphism of rings by standard Ore-style calculations. To see surjectivity, consider a fraction (x, r) ∈ LSat(S)⁻¹R and w ∈ R such that wx ∈ S, then (x, r) = ψ((wx, wr)).

From this we immediately get a sufficient condition for two localizations of R to be isomorphic:

Corollary 2.8. Let S, T be left Ore sets in a domain R. If LSat(S) = LSat(T), then S⁻¹R ≅ T⁻¹R.

Lemma 2.9. Let S, T be multiplicatively closed sets in R. Then S ⊆ LSat(T) if and only if LSat(S) ⊆ LSat(T).

Proof. Let S ⊆ LSat(T) and x ∈ LSat(S), then there exists w ∈ R such that wx ∈ S ⊆ LSat(T). But then vwx ∈ T for some v ∈ R, which implies x ∈ LSat(T) and thus LSat(S) ⊆ LSat(T). The other implication is obvious from S ⊆ LSat(S).

Remark 2.10. From a theoretical viewpoint the left saturation closure is a powerful tool that gives us a canonical form of left Ore sets with respect to the corresponding localization. For instance, in Proposition 2.4 we have seen that knowing LSat(S) is equivalent to knowing U(S⁻¹R). Unfortunately, the left saturation closure is, depending on the situation, difficult or perhaps even impossible to compute, or even to represent in finite terms. In Section 4 we discuss a left Ore set, generated by two elements, which has an infinitely (though countably) generated left saturation closure. In general we do not even expect the saturation closure to be countably generated. In our opinion, these facts need not be perceived as "bad news", but rather as an indication that the objects we are dealing with are intrinsically complicated.


3. A constructive approach to the left Ore condition

Given a left Ore set S in a domain R and (s, r) ∈ S × R, we are interested in constructively finding solutions of the left Ore condition. We will consider the following sub-problems:
(1) Find s̃ ∈ S such that there exists r̃ ∈ R satisfying s̃r = r̃s.
(2) Find the set of all s̃ ∈ S such that there exists r̃ ∈ R satisfying s̃r = r̃s.
(3) Given s̃ ∈ S such that there exists r̃ ∈ R satisfying s̃r = r̃s, find such an r̃.

From a theoretical viewpoint, all solutions of the left Ore condition are equivalent in the sense of Theorem 1.3:

Lemma 3.1. Let S be a left Ore set in a domain R and (s, r) ∈ S × R. Furthermore, let (s1, r1), (s2, r2) ∈ S × R such that s1r = r1s and s2r = r2s, then (s1, r1) ∼ (s2, r2).

Proof. Let ŝ ∈ S and r̂ ∈ R such that ŝs1 = r̂s2, then ŝr1s = ŝs1r = r̂s2r = r̂r2s implies ŝr1 = r̂r2 and thus (s1, r1) ∼ (s2, r2).

Nevertheless, the algorithms presented later strive to compute a solution to problem (2) above, since even checking the equivalence of two left fractions is a non-trivial task. As a last observation, note that the solution to problem (3) is unique: if there exist r1, r2 ∈ R such that r2s = s̃r = r1s, then (r1 − r2)s = 0, which implies r1 = r2 since R is a domain and s ≠ 0.

3.1. The kernel technique

Let S be a multiplicatively closed subset of a domain R. To the best of our knowledge there is no algorithm to decide whether S is a left Ore set in R; usually such facts are established by means of theoretical proofs. To deal with this issue, consider the map

ϕs,r : R → R/Rs, x ↦ xr + Rs,

given by right multiplication by r followed by the projection onto R/Rs. It is a homomorphism of left R-modules and the intersection of its kernel with S is exactly the solution of (2) from above. This immediately gives us the following characterization of the left Ore property:

Proposition 3.2. The following are equivalent:
(1) S satisfies the left Ore condition in R.
(2) For all (s, r) ∈ S × R, ker(ϕs,r) ∩ S ≠ ∅.

Remark 3.3. Provided we can check algorithmically whether ker(ϕs,r) ∩ S is empty, Proposition 3.2 also allows us to constructively prove that a given set S is not a left Ore set in R, if we (correctly) suspect s and r to violate the Ore condition (see Example 5.8). After choosing an s̃ ∈ ker(ϕs,r) ∩ S, r̃ is the unique solution of the equation s̃r = r̃s. These considerations are combined in the procedure LeftOre.


Algorithm 1: LeftOre
Input: (s, r) ∈ S × R.
Output: (s̃, r̃) ∈ S × R such that s̃r = r̃s.
1 begin
2   let ϕ : R → R/Rs, x ↦ xr + Rs;
3   compute ker(ϕ);
4   compute any s̃ ∈ ker(ϕ) ∩ S;
5   compute the unique solution r̃ ∈ R of the equation s̃r = r̃s;
6   return (s̃, r̃)
7 end
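In a commutative sandbox, line 3 of LeftOre amounts to an ideal quotient: there ker(ϕs,r) = (Rs : r), and the quotient is computable via the classical elimination trick (I : f) = (1/f)(I ∩ ⟨f⟩). A sympy sketch over ℚ[x, y] follows; the helper names are ours, and this does not cover the general G-algebra case, where non-commutative Gröbner bases in the style of Plural are needed:

```python
# Commutative sandbox for the kernel step of LeftOre (assumption: R = Q[x, y]).
# ker(phi_{s,r}) = (Rs : r); compute (I : f) as (1/f) * (I ∩ <f>) using an
# elimination Groebner basis with a tag variable t.
from sympy import symbols, groebner, div

t, x, y = symbols('t x y')

def ideal_intersection(I, J, gens):
    # I ∩ J: eliminate the tag variable t from t*I + (1 - t)*J
    G = groebner([t * p for p in I] + [(1 - t) * q for q in J],
                 t, *gens, order='lex')
    return [p for p in G.exprs if not p.has(t)]

def ideal_quotient(I, f, gens):
    # (I : f) = { h in R : h*f in I }, i.e. ker(phi_{s,r}) for I = Rs, f = r
    result = []
    for p in ideal_intersection(I, [f], gens):
        q, r = div(p, f, *gens)
        assert r == 0  # p lies in <f>, so the division is exact
        result.append(q)
    return result

# ker(phi_{s,r}) for s = x**2, r = x*y: the quotient ((x**2) : x*y) = (x)
print(ideal_quotient([x**2], x*y, (x, y)))
```

Intersecting the resulting kernel with S (line 4) is exactly the problem studied in Section 5.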

Remark 3.4. Instead of ϕs,r one could also consider the seemingly similar map

ψs,r : R → R/Rr, x ↦ xs + Rr.

Here we avoid intersecting ker(ψs,r) with S at first, but the kernel may contain numerous false candidates for r̃: although there exists y ∈ R such that yr = r̃s, there is no guarantee that we can find such a y in S. Therefore we need to intersect M := {(r̃, s̃) ∈ ker(ψs,r) × R | r̃s = s̃r} with R × S, which poses the same problems as intersecting ker(ϕs,r) with S directly. For this reason we prefer to work with ϕs,r.

To be able to actually carry out the computations from the procedure LeftOre and turn it into an algorithm, we need to work in a computation-friendly setting in which all intermediate steps can be carried out.

3.2. The class of G-algebras

Definition 3.5. For a field K, n ∈ ℕ and 1 ≤ i < j ≤ n consider non-zero constants cij ∈ K and polynomials dij ∈ K[x1, . . . , xn]. Suppose that there exists a monomial total well-ordering < on K[x1, . . . , xn] such that for any 1 ≤ i < j ≤ n either dij = 0 or the leading monomial of dij with respect to < is smaller than xi xj. The K-algebra

A := K⟨x1, . . . , xn | {xj xi = cij xi xj + dij : 1 ≤ i < j ≤ n}⟩

is called a G-algebra if {x1^α1 · . . . · xn^αn : αi ∈ ℕ0} is a K-basis of A.

G-algebras [20, 18] are also known as algebras of solvable type [11, 16, 15] and as PBW algebras [4]. G-algebras are left and right Noetherian domains that occur naturally in various situations and encompass algebras of linear functional operators modeling difference and differential equations.

Example 3.6. Let K be a field, qi ∈ K \ {0} and n ∈ ℕ. Common G-algebras include the following examples, where only the relations between non-commuting variables are listed:

• The commutative polynomial ring K[x1, . . . , xn].
• The n-th Weyl algebra An := K⟨x1, . . . , xn, ∂1, . . . , ∂n⟩ with ∂i xi = xi ∂i + 1 for all 1 ≤ i ≤ n.
• The n-th shift algebra Sn := K⟨x1, . . . , xn, s1, . . . , sn⟩ with si xi = xi si + si = (xi + 1)si for all 1 ≤ i ≤ n.
• The n-th q-shift algebra Sn^(q) := K⟨x1, . . . , xn, s1, . . . , sn⟩ with si xi = qi xi si for all 1 ≤ i ≤ n.
• The n-th q-Weyl algebra An^(q) := K⟨x1, . . . , xn, ∂1, . . . , ∂n⟩ with ∂i xi = qi xi ∂i + 1 for all 1 ≤ i ≤ n.
• The n-th integration algebra K⟨x1, . . . , xn, I1, . . . , In⟩ with Ii xi = xi Ii + Ii² for all 1 ≤ i ≤ n.

Furthermore, there exists a well-developed Gröbner basis theory for G-algebras which is close to the commutative case and not only allows us to explicitly compute ker(ϕs,r) (which is finitely generated), but also to solve the equation s̃r = r̃s for r̃ via division with remainder. Details can be found in [18]. Note that the basic concept of LeftOre can be adapted to other, yet more general settings.

3.3. A partial classification of Ore localizations

The only remaining problem to solve is the intersection of a left ideal with a left Ore set. Unfortunately, due to their multiplicative nature, left Ore sets are seldom finitely generated as monoids, therefore we have to single out interesting classes of Ore sets and deal with them individually. To this end, we propose the following partial classification of left Ore localizations:

Definition 3.7. Let K be a field and R a K-algebra and a domain.
• Let S be a left Ore set in R that is generated as a multiplicative monoid by at most countably many elements. Then S⁻¹R is called a monoidal localization.
• Let n ∈ ℕ, K[x] := K[x1, . . . , xn] a subring of R and p a prime ideal in K[x], then S := K[x] \ p is multiplicatively closed. If S is a left Ore set in R, then S⁻¹R is called a geometric localization.
• Let T be a K-subalgebra of R, then S := T \ {0} is multiplicatively closed. If S is a left Ore set in R, then S⁻¹R is called a (partial) rational localization.

All three types of localizations have commutative counterparts:

Example 3.8. Let R be a commutative domain and K a field.
• Let f ∈ R \ {0}, then Rf = [f]⁻¹R = {f^k | k ∈ ℕ0}⁻¹R is a monoidal localization.
• Let p be a prime ideal in the polynomial ring K[x], then K[x]p = (K[x] \ p)⁻¹K[x] is a geometric localization.
• Quot(R) = (R \ {0})⁻¹R is a rational localization, as is K(x)[y] = (K[x] \ {0})⁻¹K[x, y].
An important instance of rational localization is the following generalization of the classical quotient field construction:

Definition 3.9. If S := R \ {0} is a left Ore set in a domain R, then R is called a left Ore domain. The localization S⁻¹R is called the left quotient (skew) field of R and is denoted by Quot(R).

Therefore, any left Ore domain can be embedded into a division ring. This holds in particular for any G-algebra.


4. Case study: localizations of the first Weyl algebra

In this section we explore, by way of an example, how one can utilize left saturation not only to gain theoretical insight but also as a preprocessing step before attempting any computations in a computer algebra system. To this end we consider a field K and A1 = A1(K) = K⟨x, ∂ | ∂x = x∂ + 1⟩, the first Weyl algebra over K. We are interested in making x and ∂ invertible, which means finding a left Ore set in A1 that contains V := [x, ∂]. By the forthcoming Proposition 5.7 we have that [x] and [∂] already are left Ore sets in A1, thus V = [[x] ∪ [∂]] is indeed a left Ore set in A1 as a multiplicatively closed set generated by left Ore sets (Lemma 4.1 in [14]). The Euler operator in A1 is defined as θ := x · ∂ ∈ A1. The following "commutation rules" can be proven by induction:

Lemma 4.1. For all m, n ∈ ℕ0 and all z ∈ K we have

(θ + z)^m x^n = x^n (θ + z + n)^m  and  ∂^n (θ + z)^m = (θ + z + n)^m ∂^n.
Now we are able to compute the left saturation closure of V with some additional knowledge about factorizations:

Proposition 4.2. Let p := char(K) ∈ ℙ ∪ {0}. Then LSat(V) = [{x, ∂} ∪ (θ + ((ℤ/pℤ) \ {0, 1})) ∪ (K \ {0})].

Proof. Let w ∈ ℤ/pℤ and let S be the right-hand side of the equation in the claim. First consider the case p = 0: if w ∈ ℕ0, then x^w(θ + w) = θx^w = x∂x^w ∈ V; if −w ∈ ℕ, then ∂^(−w)(θ + w) = (θ + w − w)∂^(−w) = θ∂^(−w) = x∂^(1−w) ∈ V. If p > 0 we can always find n ∈ ℕ0 such that w = n + pℤ and we get x^n(θ + w) = θx^n ∈ V. In any case we see that θ + w ∈ LSat(V). Furthermore, for any k ∈ K \ {0} we have k⁻¹ · k = 1 ∈ V, thus S ⊆ LSat(V).

Now let r ∈ LSat(V), then there exists w ∈ A1 such that wr ∈ V, thus wr = s1 · . . . · sn for some n ∈ ℕ0 and si ∈ {x, ∂}. Using the commutation rules from Lemma 4.1 we can rewrite wr as wr = u · v · t^m, where t ∈ {x, ∂}, m ∈ ℕ0, u ∈ [θ + ((ℤ/pℤ) \ {0, 1})] and v ∈ [θ, θ + 1] (note that θ + w1 and θ + w2 commute for w1, w2 ∈ K). According to Lemma 2.6 in [6] any other non-trivial factorization of wr (like wr itself) can be derived by using the commutation rules and rewriting θ resp. θ + 1 as x∂ resp. ∂x. But all factors that can be created in this way are already contained in S, thus r ∈ S and therefore LSat(V) ⊆ S.

We will now see that this localization behaves fundamentally differently depending on p. We need the notion of the Gelfand-Kirillov dimension (GKdim, see [14, 22]), which is defined for both rings and modules with respect to a fixed field K. Over Noetherian domains its behavior is somewhat similar to that of the Krull dimension (Krdim) over commutative rings. Note that for any field K one has GKdimK(A1(K)) = 2.

4.1. Characteristic zero case

Lemma 4.3. Let K be an algebraically closed field of characteristic p = 0. Then
(a) GKdim(V⁻¹A1) = 3,
(b) GKdim(Quot(A1)) = ∞.


Proof. The first claim follows from Example 4.11 in [14], while the second result is due to Makar-Limanov, who showed in [21] that Quot(A1) contains a free subalgebra generated by two elements.

In particular, we have GKdim(A1) < GKdim(V⁻¹A1) < GKdim(Quot(A1)).

4.2. Positive characteristic case

In positive characteristic the Weyl algebra has center K[x^p, ∂^p], while in the case of characteristic zero the center is just K.

Lemma 4.4. Let K be of characteristic p > 0. Then
(a) GKdim(V⁻¹A1) = 2,
(b) GKdim(Quot(A1)) = 2.
In particular, any localization of A1 has Gelfand-Kirillov dimension 2.

Proof. Though the first claim follows from the second, we give a direct proof of it, which illustrates an important technique. Let T := [x^p, ∂^p], then clearly LSat(V) = LSat([x, ∂]) = LSat([x^p, ∂^p]) = LSat(T). Since T is contained in the center of A1 it is also a left Ore set in A1, thus V⁻¹A1 ≅ T⁻¹A1 by Corollary 2.8. By Proposition 4.2 in [14] the Gelfand-Kirillov dimension does not change when passing from A1 to a central localization of A1 like T⁻¹A1, which implies GKdim(V⁻¹A1) = GKdim(T⁻¹A1) = GKdim(A1) = 2.

Now we proceed with the second claim. Consider Z = K[x^p, ∂^p], the center of A1(K), then GKdim(Z) = Krdim(Z) = 2 by Corollary 4.4 in [14] since Z is a commutative domain. Moreover, the latter also implies that Z \ {0} is an Ore set in A1(K). We claim that (Z \ {0})⁻¹A1 = (A1 \ {0})⁻¹A1 = Quot(A1), in other words, LSat(Z \ {0}) = A1 \ {0}. It is enough to show that any left ideal {0} ≠ L ⊂ A1 has a non-zero intersection with Z. Suppose that there is an L such that L ∩ Z = {0}. By Lemma 8.5 in [14] it then follows that GKdim(A1/L) ≥ GKdim(Z) holds. Thus 2 = GKdim(A1) ≥ GKdim(A1/L) ≥ 2, and therefore L = {0} follows by Corollary 8.6 in [14].

5. Computing the intersection of a left ideal and a left Ore set

This section provides the theory and algorithms to compute a representation of the intersection of a left ideal I with a left Ore set S, where S belongs to one of the three types stated above, within the setting of a G-algebra A over a field K. To avoid rather trivial cases we will assume that I is neither the zero ideal nor the whole algebra, i.e. {0} ⊊ I ⊊ A.

5.1. Monoidal localizations

Monoidal localization allows us to adjoin inverses of certain elements, which for example describes the transition from the polynomial ring K[x] to the Laurent polynomial ring K[x, x⁻¹]. For now, let S = [g] for some g ∈ A \ K. To the best of our knowledge it is not possible in general to decide whether I ∩ S is empty, but in some cases we can give a positive answer:


Remark 5.1. Since A is a domain, so is K[S] = K[g] ⊆ A, the K-monoid algebra of S. Assume that we are able to compute L := I ∩ K[S]¹. If L = {0}, then in particular I ∩ S = ∅ since I ∩ S = L ∩ S.

In the following we will assume I ∩ S ≠ ∅. For our purposes this is not a restriction, since we are mostly interested in the case where I = ker(ϕs,r) for some s ∈ S and r ∈ R. Then I ∩ S ≠ ∅ follows from Proposition 3.2. If I ∩ S is non-empty, it has the structure of a principal monoid ideal:

Lemma 5.2. Let g ∈ A \ K. If I ∩ [g] ≠ ∅, then there exists m ∈ ℕ such that I ∩ [g] = g^m · [g].

Proof. If I ∩ [g] ≠ ∅, then there exists a minimal m ∈ ℕ such that g^m ∈ I and g^k ∉ I for all k < m. Now g^j = g^(j−m) · g^m ∈ I for any j ≥ m, thus I ∩ [g] = g^m · [g].

The natural thing to do is to iterate over the natural numbers to find the smallest m among them such that g^m ∈ I. This membership test can be done by computing normal forms in the Gröbner sense: in the algorithm MonoidalIntersection, NF(g^m | F) denotes the normal form of g^m with respect to F. To avoid unnecessarily expensive normal form computations we use another fact from Gröbner basis theory: the leading monomial of any element of I must be divisible by the leading monomial of some element of a Gröbner basis of I. Since we know that at least one power of g is contained in I, we find the minimal m such that lm(g^m) is divisible by the leading monomial of some basis element, which can be done by comparing the leading exponents.

Algorithm 2: MonoidalIntersection
Input: Gröbner basis F = {f1, . . . , fk} of I and g ∈ A \ K with I ∩ [g] ≠ ∅.
Output: m ∈ ℕ such that I ∩ [g] = g^m · [g].
1 begin 2 foreach 1 ≤ i ≤ k do 3 mi := min{k ∈ (N ∪ {∞}) : lm(fi )| lm(g)k }; 4 end 5 m := min{mi | 1 ≤ i ≤ k}; 6 while NF(g m |F ) 6= 0 do 7 m := m + 1; 8 end 9 return m; 10 end While multiplicatively closed sets generated by infinitely many elements are out of scope for computational purposes, we still have to deal with finite sets of generators. To reduce this to the case of one generator, we generalize the following classical result: let f1 , . . . , fk ∈ K[x] := K[x1 , . . . , xn ], S := [f1 , . . . , fk ] and T = [f1 · . . . · fk ], then S −1 K[x] ∼ = T −1 K[x]. Lemma 5.3. Let R be a domain and g1 , . . . , gk ∈ R \ {0} such that gi gj = gj gi for all i and j. Consider S = [g1 , . . . , gk ] and T = [g] for g := g1 · . . . · gk . 1 See

[19] for conditions and further details.

12

(a) S is a left Ore set in R if and only if T is a left Ore set in R.
(b) If S and T are left Ore sets, then S^{-1}R ≅ T^{-1}R.

Proof. By construction, S and T are multiplicatively closed sets such that T ⊆ S. Since the g_i commute we have (g_1 · ... · g_{j−1} · g_{j+1} · ... · g_k) g_j = g ∈ T, which implies g_j ∈ LSat(T) and thus S ⊆ LSat(T), since the g_j generate S as a monoid. Together with T ⊆ S ⊆ LSat(S) we get LSat(S) = LSat(T) by applying Lemma 2.9 twice. Now the first part follows from Lemma 2.6, the second from Proposition 2.7.

5.2. Geometric localizations of Weyl algebras

Let n ∈ N and let p be a prime ideal in R := K[x_1, ..., x_n] ⊊ A_n. Then R \ p is a left Ore set in A_n and we can consider the geometric localization (R \ p)^{-1} A_n. The most common occurrence of this localization is the special case where we replace p by the maximal ideal m_p in R corresponding to a point p ∈ K^n. The result is the so-called local (algebraic) Weyl algebra A_{n,p} := (R \ m_p)^{-1} A_n, which is important in D-module theory.

The main theoretical result in this paragraph is that the Weyl algebras contain a multitude of left Ore sets. To prove this we first need some technical results. Note that due to the relations in A_n we have f ∂_j = ∂_j f − ∂f/∂x_j for all f ∈ R.

Lemma 5.4. Let f ∈ R and j ∈ {1, ..., n}. For all i ∈ N_0 we have

  f^{i+1} ∂_j = (∂_j f − (i+1) ∂f/∂x_j) f^i.

Proof. Induction on i ∈ N_0: let i = 0, then f^1 ∂_j = f ∂_j = (∂_j f − 1 · ∂f/∂x_j) f^0. Assume that the claim holds for i ∈ N_0, then we have

  f^{i+2} ∂_j = f (f^{i+1} ∂_j) = f (∂_j f − (i+1) ∂f/∂x_j) f^i = (f ∂_j) f^{i+1} − (i+1) (∂f/∂x_j) f^{i+1}
             = (∂_j f − ∂f/∂x_j) f^{i+1} − (i+1) (∂f/∂x_j) f^{i+1} = (∂_j f − (i+2) ∂f/∂x_j) f^{i+1}.
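Since elements of A_n act as differential operators on smooth functions, the identity of Lemma 5.4 can be double-checked by applying both sides to a generic function. A small sympy sketch (an independent sanity check, not part of the algorithms; here "∂ ∘ f" means "multiply by f, then differentiate"):

```python
from sympy import symbols, Function, diff, simplify

# Verify Lemma 5.4 for n = 1 and i = 2: as operators on a generic phi,
#   f^(i+1) ∂  ==  (∂ ∘ f - (i+1) f') f^i.
x = symbols('x')
f = Function('f')(x)
phi = Function('phi')(x)
i = 2

lhs = f**(i + 1) * diff(phi, x)                        # f^(i+1) ∂ applied to phi
rhs = diff(f * (f**i * phi), x) - (i + 1) * diff(f, x) * (f**i * phi)

assert simplify(lhs - rhs) == 0
```

The same check goes through for any fixed i, since the product rule reproduces exactly the correction term −(i+1) ∂f/∂x_j of the lemma.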

Lemma 5.5. Let f ∈ R, i ∈ N_0 and β ∈ N_0^n such that i + 1 ≥ |β|. Then there exists v_{i+1,β} ∈ A_n such that
(i) tdeg_∂(v_{i+1,β}) < |β|,
(ii) v_{i+1,β} only contains partial derivatives of f of the form ∂^{|α|}f/∂x^α, where β − α ∈ N_0^n, and
(iii) f^{i+1} ∂^β = (∂^β f^{|β|} + v_{i+1,β}) f^{i+1−|β|}.
Proof. Induction on |β| ∈ N_0: if |β| = 0, then β = 0. Set v_{i+1,0} := 0, then f^{i+1} ∂^β = f^{i+1} = (∂^β f^{|β|} + v_{i+1,0}) f^{i+1−|β|}. Now let β ∈ N_0^n \ {0} and assume the claim holds for all α ∈ N_0^n with |α| < |β|. Then β = α + e_j for some j ∈ {1, ..., n} and α ∈ N_0^n with |α| < |β|. Now

  f^{i+1} ∂^β = f^{i+1} ∂^α ∂_j = (∂^α f^{|α|} + v_{i+1,α}) f^{i+1−|α|} ∂_j
             = ∂^α f^{i+1} ∂_j + v_{i+1,α} f^{i+1−|α|} ∂_j
             = ∂^α (∂_j f − (i+1) ∂f/∂x_j) f^i + v_{i+1,α} (∂_j f − (i+1−|α|) ∂f/∂x_j) f^{i−|α|}
             = (∂^α ∂_j f^{1+|α|} − (i+1) ∂^α (∂f/∂x_j) f^{|α|} + v_{i+1,α} ∂_j f − (i+1−|α|) v_{i+1,α} ∂f/∂x_j) f^{i−|α|}
             = (∂^β f^{|β|} + v_{i+1,β}) f^{i+1−|β|},

where v_{i+1,β} := −(i+1) ∂^α (∂f/∂x_j) f^{|α|} + v_{i+1,α} ∂_j f − (i+1−|α|) v_{i+1,α} ∂f/∂x_j satisfies the conditions above.
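For a concrete instance of Lemma 5.5, the case n = 1, i = 1, β = 2 can again be checked as operators on a generic function, with v_{2,2} assembled by the recursion of the proof from v_{2,1} = −2 f' (Lemma 5.4). A sympy sketch (an illustration only):

```python
from sympy import symbols, Function, diff, simplify

# Check Lemma 5.5 for n = 1, i = 1, beta = 2:  f^2 ∂^2 = (∂^2 f^2 + v) f^0,
# where the recursion gives the operator
#   v = -2 ∂ ∘ (f' f) + v1 ∘ ∂ ∘ f - v1 f'   with  v1 = -2 f'.
x = symbols('x')
f = Function('f')(x)
phi = Function('phi')(x)
fx = diff(f, x)

lhs = f**2 * diff(phi, x, 2)                  # f^2 ∂^2 applied to phi
v_phi = (-2 * diff(fx * f * phi, x)           # -2 ∂ (f' f phi)
         - 2 * fx * diff(f * phi, x)          # v1 ∂ (f phi), v1 = -2 f'
         + 2 * fx**2 * phi)                   # -(i+1-|alpha|) v1 f' phi
rhs = diff(f**2 * phi, x, 2) + v_phi          # (∂^2 f^2 + v) applied to phi

assert simplify(lhs - rhs) == 0
```

Note that v has ∂-total degree 1 < |β| = 2 and only involves f and f', in accordance with conditions (i) and (ii).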

Lemma 5.6. Let f ∈ R, r ∈ A_n, d := tdeg_∂(r) and k ∈ N_0. Then there exist r̃, r̂ ∈ A_n such that

  f^{d+k} · r = r̃ · f^k   and   r · f^{d+k} = f^k · r̂.

Proof. Let r = Σ_{β∈N_0^n} b_β ∂^β, where b_β ∈ R. Since d + k ≥ |β| for all β such that b_β ≠ 0, by Lemma 5.5 there exist v_{d+k,β} ∈ A_n such that

  f^{d+k} ∂^β = (∂^β f^{|β|} + v_{d+k,β}) f^{d+k−|β|} = (∂^β f^{|β|} + v_{d+k,β}) f^{d−|β|} f^k.

Define

  r̃ := Σ_{β∈N_0^n} b_β (∂^β f^{|β|} + v_{d+k,β}) f^{d−|β|},

then

  f^{d+k} · r = f^{d+k} · Σ_{β∈N_0^n} b_β ∂^β = Σ_{β∈N_0^n} b_β f^{d+k} ∂^β = Σ_{β∈N_0^n} b_β (∂^β f^{|β|} + v_{d+k,β}) f^{d−|β|} f^k = r̃ · f^k.

The other statement can be shown analogously using a right-sided version of Lemma 5.5.

Proposition 5.7. Let S be a multiplicatively closed set in R = K[x_1, ..., x_n] and T a multiplicatively closed set in K[∂_1, ..., ∂_n]. Then S and T are left and right Ore sets in A_n.

Proof. Since S is a multiplicatively closed set in R, it is also a multiplicatively closed set in A_n. Let r ∈ A_n and s ∈ S. By Lemma 5.6 there exist r̃, r̂ ∈ A_n such that s^{d+1} · r = r̃ · s and r · s^{d+1} = s · r̂, where d := tdeg_∂(r). Since S is multiplicatively closed we have s^{d+1} ∈ S, therefore S satisfies the left and the right Ore condition in A_n, thus S is a left and right Ore set in A_n. The statement for T follows from analogous calculations.

This implies that any multiplicatively closed set in R is a left and right Ore set in A_n; in particular, geometric localization of A_n is possible at the complement of any prime ideal p in R. But even in closely related G-algebras like the shift algebra this need not hold, as the following example demonstrates:

Example 5.8. Consider the prime ideal p = ⟨x + 1⟩ in K[x] ⊆ S_1. For the pair (x, s) ∈ (K[x] \ p) × S_1, a simple computation delivers ker(ϕ_{x,s}) = ⟨x + 1⟩ = p. Therefore ker(ϕ_{x,s}) ∩ (K[x] \ p) = ∅, so K[x] \ p is not a left Ore set in S_1 by Proposition 3.2.

Thus the main application of the geometric type is localizing the n-th Weyl algebra A_n at the left Ore set S := R \ p, where p is a prime ideal in R. In contrast to the two other types of localizations, the intersection of I and S has no exploitable additional structure: while it is a multiplicatively closed set without 1, it need not be finitely generated. Therefore, in the algorithm GeometricIntersection, we essentially return the intersection of I with R, which can be computed via Gröbner-driven elimination of variables.

Algorithm 3: GeometricIntersection
Input: a left ideal I of A_n; S, R and p as above.
Output: a left ideal J in R such that I ∩ S = J \ p.
1 begin
2   compute Ĩ := I ∩ R = ⟨m_1, ..., m_k⟩;
3   foreach 1 ≤ i ≤ k do
4     let m̃_i be the normal form of m_i with respect to p;
5   end
6   return J := ⟨m̃_1, ..., m̃_k⟩;
7 end

An element f ∈ I ∩ R is an element of I ∩ S if and only if f ∉ p, which can be checked by computing the normal form of f with respect to p.

Proposition 5.9. In the situation of the algorithm GeometricIntersection, I ∩ S = ∅ if and only if m̃_i = 0 for all i.

Proof. By construction, m̃_i = 0 for all i if and only if m_i ∈ p for all i, which is equivalent to I ∩ R ⊆ p. From I ∩ R = I ∩ ((R \ p) ∪ p) = I ∩ (S ∪ p) = (I ∩ S) ∪ (I ∩ p) we can see that this is equivalent to I ∩ S = ∅, since I ∩ S ⊆ R \ p.

Thus, if I ∩ S ≠ ∅, a member of this intersection can be found among the non-zero generators of J.

5.3. Rational localizations

In algebras of linear operators, rational localization provides the formal mechanism of passing from polynomial to rational coefficients, for example from the polynomial Weyl algebra A_1 to the first rational Weyl algebra (K[x] \ {0})^{-1} A_1.
To set the scene, let A be generated as a G-algebra by the variables x_1, ..., x_n and let V ⊆ {1, ..., n} such that {x_i | i ∈ V} generates a subalgebra B of A and S := B \ {0} is a left Ore set in A. If we can eliminate the variables {x_i | i ∈ {1, ..., n} \ V} with Gröbner-driven elimination (in contrast to the commutative case, this is not always possible, see [19, 18]), then the algorithm RationalIntersection computes the intersection of S and I.

Algorithm 4: RationalIntersection
Input: a left ideal I of A; B as above.
Output: the intersection I ∩ S.
1 begin
2   compute J := I ∩ B via elimination;
3   return J \ {0};
4 end
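The elimination step in Algorithm 4 has a well-known commutative counterpart: intersecting an ideal with a subring generated by some of the variables via a lex Gröbner basis. A hedged sympy sketch of this commutative shadow (the genuinely non-commutative version needs the elimination orderings discussed above):

```python
from sympy import symbols, groebner

# Commutative shadow of "compute J := I ∩ B via elimination":
# intersect I = <x*y - 1, y**2 - x> in Q[x, y] with B = Q[y] by
# computing a lex Groebner basis with x > y and keeping the
# generators that are free of x.
x, y = symbols('x y')
G = groebner([x*y - 1, y**2 - x], x, y, order='lex')
J = [p for p in G.exprs if x not in p.free_symbols]

assert J == [y**3 - 1]   # I ∩ Q[y] is generated by y^3 - 1
```

Indeed, y^2 = x and x*y = 1 force y^3 − 1 ∈ I, and the elimination certifies that this single polynomial generates the intersection.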

6. Further algorithmic aspects

6.1. The right-sided analogue

While we concentrate mostly on the left-sided version of non-commutative structures, the right-sided analogues of the given definitions and results hold as well, which can also be seen by considering opposite structures:

Definition 6.1. Let (R, +, ·) be a ring, then the opposite ring of R is R^op := (R, +, ∗), where a ∗ b := b · a for all a, b ∈ R.

In particular, a right Ore set in R is a left Ore set in R^op. Most algorithms for non-commutative structures in Singular:Plural are only implemented for the left-sided versions, while right-sided computations are carried out in a left-sided setting in the opposite ring. Note that there are special tools for handling opposite rings and the process of creating opposite objects.

6.2. The left-right conundrum

Another classical result in the theory of Ore localization is the following: if a multiplicative subset S of a domain R is both left and right Ore, then the left Ore localization S^{-1}R is isomorphic to the right Ore localization RS^{-1} via

  RS^{-1} → S^{-1}R,   rs^{-1} ↦ s̃^{-1}r̃,   where s̃r = r̃s.

Given a right fraction rs^{-1} ∈ RS^{-1}, finding a corresponding left fraction s̃^{-1}r̃ ∈ S^{-1}R is therefore just another application of the left Ore condition, while computing the inverse image of a left fraction requires the right Ore condition.

6.3. Basic arithmetic

If we examine Theorem 1.3 closely, we see that addition and multiplication in S^{-1}R only consist of computing one instance of the left Ore condition as well as some basic additions and multiplications in the base ring R, which directly gives us algorithms for addition and multiplication.

6.4. Computing inverses

Additive inverses are given by −(s, r) = (s, −r), but, as we have seen earlier, multiplicative inverses are immensely more complicated. Proposition 2.4 tells us that a fraction (s, r) is invertible if and only if r ∈ LSat(S), thus deciding invertibility of a fraction is not harder than computing LSat(S). After Remark 2.10 we do not expect LSat(S) to be presentable in finite terms. However, in the case of geometric localizations at a prime ideal p we are in the fortunate situation that S is already saturated:

Lemma 6.2. Let p be a prime ideal in R := K[x_1, ..., x_n]. Then both R \ {0} and S := R \ p are saturated in A_n.

Proof. For the first part, consider a global monomial ordering where ∂_i > x_j for all i, j. Let a, b ∈ A_n such that a · b ∈ R \ {0}, then a and b are non-zero, thus tdeg_∂(a) + tdeg_∂(b) = tdeg_∂(a · b) = 0. Therefore both a and b are contained in R \ {0}. Now let a · b ∈ S ⊆ R \ {0}, then by the first part we have a, b ∈ R \ {0}. Since either a ∈ p or b ∈ p would imply a · b ∈ p, we have a, b ∈ R \ p = S.

Thus a fraction (s, r) in a geometric localization of A_n at p is invertible if and only if r ∈ K[x_1, ..., x_n] \ p; the latter condition can be checked algorithmically with commutative Gröbner methods. Unfortunately, S will not be saturated in general for the other localization types. A closer look at the definition of the left saturation closure yields the following insight:

Lemma 6.3. Let S be a left Ore set in a domain R and r ∈ R. Then r ∈ LSat(S) if and only if Rr ∩ S ≠ ∅.

Therefore we can decide invertibility of a given fraction if we can decide non-emptiness of the intersection of S with a principal left ideal. For rational localizations this can be checked with the usual Gröbner tools, but for monoidal localizations this is still an open problem, as stated before. Some non-units might be identified with the technique described in Remark 5.1.

6.5. Canceling a fraction

Given a representation (s, r) of a fraction, it is natural to ask whether there is a simpler representation (s′, r′) of the same fraction, for example one where the total degree of the denominator s′ is smaller than that of s. Canceling a fraction between other computation steps can have a significant impact on the total computation time.
Given that we are not in a unique factorization domain, there may be many different representations that we could call simpler than the initial one, thus it is also of interest to find all simpler representations. We present two approaches to this problem. To this end, let (s, r) ∈ S × R be a representation of a fraction in a left Ore localization S^{-1}R of a G-algebra R. We want to compute (at least)

  C_{s,r} := {(ŝ, r̂) ∈ S × R | ∃ f ∈ R : f ŝ = s and f r̂ = r},

the set of all representations of (s, r) that can be constructed from (s, r) by left canceling.

6.5.1. Syzygy-based canceling

The first approach is based on computing right and left syzygies (denoted RSyz resp. LSyz below), which can be done with Gröbner-driven algorithms. Let

  M := RSyz((s, r)) = {(a, b)^T ∈ R^{2×1} | sa + rb = 0}.

Note that M ≠ {0}, since any G-algebra is right Noetherian and thus a right Ore domain. For any (a, b)^T ∈ M let

  N_{a,b} := LSyz((a, b)^T) = {(q, p) ∈ R^{1×2} | qa + pb = 0}.

We also have N_{a,b} ≠ {0}, since (s, r) ∈ N_{a,b}.

Lemma 6.4. Let (a, b)^T ∈ M \ {0}.
(a) We have C_{s,r} ⊆ N_{a,b}.
(b) Let (q, p) ∈ N_{a,b} with q ∈ S. Then (q, p) = (s, r) in S^{-1}R.

Proof. (a) Let (ŝ, r̂) ∈ C_{s,r}, then there exists f ∈ R \ {0} such that s = f ŝ and r = f r̂. Since (a, b)^T ∈ M we have 0 = sa + rb = f ŝa + f r̂b = f (ŝa + r̂b), which implies ŝa + r̂b = 0 and thus (ŝ, r̂) ∈ N_{a,b}.
(b) By the left Ore condition on S there exist s̃ ∈ S and r̃ ∈ R such that s̃q = r̃s. Now we have s̃pb = −s̃qa = −r̃sa = r̃rb. Since b = 0 would imply the contradiction a = 0, we can infer that s̃p = r̃r, which implies (q, p) = (s, r) in S^{-1}R.

Thus Ñ_{a,b} := N_{a,b} ∩ (S × R) is a superset of C_{s,r} consisting of representations of (s, r), which immediately leads to the algorithm SyzCancel.

Algorithm 5: SyzCancel
Input: a left fraction (s, r) ∈ S^{-1}R.
Output: a set of representations of (s, r) containing C_{s,r}.
1 begin
2   compute M := RSyz((s, r)) = {(a, b)^T ∈ R^{2×1} | sa + rb = 0};
3   choose any non-zero (a, b)^T ∈ M;
4   compute N := LSyz((a, b)^T) = {(q, p) ∈ R^{1×2} | qa + pb = 0};
5   compute Ñ := N ∩ (S × R);
6   return Ñ;
7 end
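For intuition, over the commutative ring Q[x] both syzygy modules are principal and SyzCancel collapses to gcd cancellation: with g = gcd(s, r), the right syzygies of (s, r) are generated by (r/g, −s/g), and the left syzygies of that generator are exactly the multiples of (s/g, r/g), the fully canceled representation. A minimal sympy illustration (commutative shadow only, not the non-commutative algorithm):

```python
from sympy import symbols, gcd, quo, expand

# Commutative shadow of SyzCancel over Q[x].
x = symbols('x')
s = expand((x**2 - 1) * (x + 2))       # denominator
r = expand((x - 1) * (x + 3))          # numerator

g = gcd(s, r)
a, b = quo(r, g), -quo(s, g)           # generator of RSyz((s, r))
assert expand(s * a + r * b) == 0

q, p = quo(s, g), quo(r, g)            # generator of LSyz((a, b))
assert expand(q * a + p * b) == 0
assert expand(q * r - p * s) == 0      # (q, p) represents the same fraction
```

Here (q, p) = ((x+1)(x+2), x+3) is the canceled representation of the fraction s^{-1} r, obtained by removing the common left factor g = x − 1.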

6.5.2. Factorization-based canceling

Since G-algebras are finite factorization domains ([3]), there are only finitely many factorizations of the denominator s. Thus we can compute C_{s,r} as follows:

1. Set M := ∅.
2. Compute all factorizations of s ∈ S ⊊ R of the form s = f_i s_i, where f_i is irreducible, s_i a non-unit and i ∈ I, where I is a suitable index set for keeping track of these different factorizations. If I = ∅, return {(s, r)}.
3. Compute the index set J := {i ∈ I | ∃ r_i ∈ R : r = f_i r_i}, where r_i can be obtained by right division: j ∈ J if and only if the right normal form rightNF(r, f_j) = 0. This can only be the case if tdeg(r) ≥ tdeg(f_j). If J = ∅, return {(s, r)}.
4. For every j ∈ J, apply the procedure recursively to (s_j, r_j) and add the results to M.

After finitely many steps we obtain a list of all fully canceled representations of (s, r). Apart from (s, r) itself, they all have denominators of total degree strictly smaller than tdeg(s), since tdeg(s_i) = tdeg(s) − tdeg(f_i) ≤ tdeg(s) − 1. Still, there can be several representatives with minimal total degree of the denominator:


Example 6.5. Consider again the localization LSat(V)^{-1} A_1 from Section 4. Then (x^2, x∂ − 1) and (x∂ + 2, ∂^2) represent the same fraction in LSat(V)^{-1} A_1, since

  (x^2, x∂ − 1) = (∂x^2, ∂(x∂ − 1)) = (x(x∂ + 2), x∂^2) = (x∂ + 2, ∂^2).

Both denominators have total degree 2 and cannot be canceled further.

7. Implementation

In this section we outline the structure of olga.lib (short for "Ore localization in G-algebras"), our implementation of the algorithms developed above in the computer algebra system Singular:Plural. The latest version can be found at www.math.rwth-aachen.de/~Johannes.Hoffmann/singular.html and will also be included in a later version of Singular.

7.1. Setting, conventions and restrictions

For now, olga.lib can perform computations in the following situations:

• For monoidal localizations, consider a G-algebra A generated by the variables x_1, ..., x_n and let 1 ≤ k ≤ n such that R := K[x_1, ..., x_k] is a commutative polynomial subring of A. Further, let g_1, ..., g_t ∈ R \ {0} such that S := [g_1, ..., g_t] is a left Ore set in A.

• Geometric localizations are only implemented for Weyl-like algebras A in 2n variables, where the first n variables x_1, ..., x_n generate a commutative polynomial subring R := K[x_1, ..., x_n] of A: let p be a prime ideal in R ⊊ A and set S := R \ p.

• For rational localizations, consider a G-algebra A generated by the variables x_1, ..., x_n and let 1 ≤ i_1 < ... < i_k ≤ n such that x_{i_1}, ..., x_{i_k} generate a sub-G-algebra B of A and S := B \ {0} is a left Ore set in A.

In any of these cases we can perform basic arithmetic in the localization at S constructively.

Remark 7.1. In the monoidal case, the restriction that g_1, ..., g_t be contained in a commutative polynomial ring is due to the existence of a unique factorization into irreducible elements there, which easily allows to check whether a given element s is contained in S or not. For the computation of the left Ore condition it suffices that the g_i commute pairwise, see Lemma 5.3. All computations will actually be carried out in the localization at [g], where g is the square-free part of g_1 · ... · g_t, which is isomorphic to the localization at S = [g_1, ..., g_t], again by Lemma 5.3.

Remark 7.2. In the rational case we also need the existence of an elimination ordering for the variables not indexed by i_1, ..., i_k to compute the intersection of ker(ϕ_{s,r}) with the subalgebra B. This technical condition is satisfied in many applications, especially in the transition from polynomial to rational coefficients in the setting of linear functional operators. Total rational localizations, that is, computations in the quotient field of a G-algebra, also satisfy this condition since no elimination is required.

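The square-free part mentioned in Remark 7.1 is a standard commutative preprocessing step; a sympy sketch (an illustration of the idea, not olga.lib's actual code):

```python
from sympy import symbols, sqf_list, prod, expand

# Square-free part of g1 * ... * gt: the product of the distinct
# irreducible factors, each taken with multiplicity one.
x, y = symbols('x y')
g1, g2, g3 = x + 3, x*y + y, x + 3      # g1 and g3 coincide on purpose
g = expand(g1 * g2 * g3)                # = (x+3)^2 * y * (x+1)

coeff, factors = sqf_list(g)            # square-free decomposition of g
sqf_part = prod(base for base, mult in factors)

assert expand(sqf_part) == expand((x + 3) * (x*y + y))
```

By Lemma 5.3 the localization at [g_1, g_2, g_3] agrees with the localization at the single square-free product, so repeated or nilpotent-looking factors cost nothing.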

Remark 7.3. While we know that geometric localization is well-defined at any prime ideal p in the Weyl setting, in the other situations we have no automatic way to check whether the given input indeed represents a left Ore set in A. If it is not left Ore, then the behaviour of the algorithms is unspecified: computations may yield a plausible result or fail with an error.

Remark 7.4. Computing a left representation from a right representation is based on the computation of a left Ore condition. Analogously, computing a right representation from a left representation requires the right Ore property. If the respective Ore property does not hold, the corresponding algorithms might fail.

7.2. Data structure

A non-commutative fraction x is represented by a vector [s, r, p, t] with entries of type poly, where (s, r) = s^{-1}r is a representation of x as a left fraction, while (p, t) = pt^{-1} is a representation of x as a right fraction. If s = 0 or t = 0, then the corresponding representation is considered as not yet known. If both are zero, the fraction is not valid. If both s and t are non-zero, the two representations have to agree, that is, rt = sp. A vector adhering to this specification will be called a fraction vector below.

To interpret x in the context of a localization we have to specify a localization type, which is an int with value 0 for monoidal, 1 for geometric and 2 for rational localization, as well as some localization data, which is stored in an object of the universal type def to accommodate the different settings:

• Monoidal: a list g_1, ..., g_t with entries of type poly.
• Geometric: an ideal p in K[x_1, ..., x_n].
• Rational: an intvec containing i_1, ..., i_k.

7.3. Procedures

Apart from some auxiliary functions, olga.lib contains the following procedures, which require two parameters specifying a left Ore set S via an int locType and a def locData as described in the section above.
If these parameters are not mentioned explicitly, they have to be appended at the end of the parameter lists.

7.3.1. ore(poly s, poly r, int locType, def locData, int rightOre)
If rightOre is set to 0, computes (s̃, r̃, J), where s̃ ∈ S and r̃ ∈ R satisfy s̃r = r̃s and J is an ideal describing all possible choices for s̃ as specified in Section 5. If rightOre is set to 1, computes the right-sided analogue. This procedure will be replaced by two separate functions leftOre and rightOre in future releases.

7.3.2. convertRightToLeftFraction(vector v)
Computes a left representation of the right fraction v.

7.3.3. convertLeftToRightFraction(vector v)
Computes a right representation of the left fraction v.

7.3.4. addLeftFractions(vector a, vector b)

7.3.5. multiplyLeftFractions(vector a, vector b)

7.3.6. areEqualLeftFractions(vector a, vector b)

7.3.7. isInS(poly p)
Checks whether p is contained in S.

7.3.8. isInvertibleLeftFraction(vector v)
Checks if v is invertible (see Section 6.4 for the interpretation of the result).

7.3.9. invertLeftFraction(vector v)
Returns the inverse of v if v is invertible according to isInvertibleLeftFraction.

7.3.10. cancelLeftFraction(vector v)
Performs steps to find an "easier" representation of v.

7.3.11. reduceLeftFraction(vector a, vector b)
Only for rational localizations: performs a Gröbner-like reduction step to reduce a with respect to b.

7.4. Examples

The first example demonstrates a left-to-right conversion in the second rational q-shift algebra A, which is Q(q)(x, y)⟨Q_x, Q_y | F⟩ with the set of relations (cf. also Example 3.6)

  F = {Q_x g(x, y) = g(qx, y)Q_x, Q_y g(x, y) = g(x, qy)Q_y, Q_y Q_x = Q_x Q_y | g(x, y) ∈ Q(q)(x, y) \ Q(q)}.

LIB "olga.lib";
ring Q = (0,q),(x,y,Qx,Qy),dp;    // comm. polynomial ring
matrix C[4][4] = UpOneMatrix(4);  // sets non-comm.
C[1,3] = q; C[2,4] = q;           // relations
def ncQ = nc_algebra(C,0);        // creates A from Q
setring ncQ;
intvec v = 1,2;
poly f = Qx+Qy;
poly g = x^2+1;
vector frac = [g,f,0,0];
vector result = convertLeftToRightFraction(frac,2,v);

Now result contains the left representation (x^2 + 1)^{-1}(Q_x + Q_y) of frac as well as its newly computed right representation ((q^4 x^2 + q^2)Q_x + (x^2 + q^2)Q_y) · (x^4 + (q^2 + 1)x^2 + q^2)^{-1}. We can convince ourselves that the two representations are equal and that the right denominator of result is contained in S:

f * result[4] == g * result[3];
-> 1
isInS(result[4],2,v);
-> 1

The second example consists of the addition of two left fractions in various localizations of the second Weyl algebra A_2 = Q⟨x, y, ∂_x, ∂_y | F⟩, where the set of relations F is as in Example 3.6:

LIB "olga.lib";
ring R = 0,(x,y,dx,dy),dp;  // comm. polynomial ring
def W = Weyl(); setring W;  // creates A_2 from R
poly g1 = x+3;
poly g2 = x*y+y;

list L = g1,g2;
vector frac1 = [g1,dx,0,0];
vector frac2 = [g2,dy,0,0];
vector resm = addLeftFractions(frac1,frac2,0,L);

Here, resm has left denominator x^3 y + 7x^2 y + 15xy + 9y and left numerator x^2 y∂_x + 4xy∂_x + x^2 ∂_y + 3y∂_x + 6x∂_y + 9∂_y as a fraction in the monoidal localization of A_2 at S = [x + 3, xy + y].

ideal p = y-3;
vector resg = addLeftFractions(frac1,frac2,1,p);

resg contains (x^2 y + 4xy + 3y)^{-1}(xy∂_x + y∂_x + x∂_y + 3∂_y) and belongs to the geometric localization of A_2 at S = Q[x, y] \ ⟨y − 3⟩.

intvec rat = 2,4;
frac1 = [y+3,dx,0,0];
frac2 = [dy-1,x,0,0];
vector resr = addLeftFractions(frac1,frac2,2,rat);

Lastly, resr is given by (y∂_y^2 − 2y∂_y + 3∂_y^2 + y − 4∂_y + 1, x^2 y∂_y + ∂_x ∂_y^2 − xy + 3x∂_y − 2∂_x ∂_y − x + ∂_x), an element of the rational localization of A_2 at S = Q⟨y, ∂_y | ∂_y y = y∂_y + 1⟩ \ {0}. The latter localization can be written as Quot(Q⟨y, ∂_y | ∂_y y = y∂_y + 1⟩)⟨x, ∂_x | ∂_x x = x∂_x + 1⟩, a polynomial Weyl algebra in the variables {x, ∂_x} over the quotient field of a Weyl algebra in {y, ∂_y}.

8. Conclusion and future work

The algorithmic framework presented here is based on a constructive approach that strives for broad generality. At a very general level we face the major problem of intersecting a left ideal with a submonoid S of R; we are not aware whether this problem is decidable in general. Nevertheless, we have proposed solutions for three application-inspired situations where S carries additional structure, though even there some restrictions still apply. This should not be considered a failure of the approach, but rather a hint at the high intrinsic complexity of the problem.

The proposed framework is easily expandable to include other types of left Ore sets S ⊊ R, provided the following two problems can be solved algorithmically:
1. the submonoid membership problem (i.e. whether r ∈ S for a given r ∈ R),
2. the intersection of a left ideal in R with a submonoid S.
Apart from overcoming the current restrictions already mentioned throughout the text, we are working on the following. The section about the left saturation closure of multiplicatively closed sets treats only a special case of a more general notion, which also includes the important concept of the local closure of submodules, such as the celebrated Weyl closure in D-module theory. Utilizing the ability to create user-defined data types, introduced in Singular from version 4 on, we are working on an object-oriented interface for olga.lib to improve usability. To this end, we also intend to turn olga.lib into a true sandbox environment for all computations associated with Ore localization of G-algebras.

9. Acknowledgements

The authors are very grateful to Daniel Andres, Vladimir Bavula, Burcin Erocal, Christoph Koutschan and Oleksander Motsak for discussions on the subject, even if some of these discussions happened a couple of years ago. Furthermore, we would like to thank the referees for their helpful suggestions. The second author is grateful to the transregional collaborative research centre SFB-TRR 195 "Symbolic Tools in Mathematics and their Application" of the German DFG for partial financial support.

References

[1] Sergei A. Abramov, Ha Q. Le, and Ziming Li. OreTools: a computer algebra library for univariate Ore polynomial rings. Technical Report CS-2003-12, University of Waterloo, 2003.
[2] Joachim Apel and Wolfgang Lassner. An extension of Buchberger's algorithm and calculations in enveloping fields of Lie algebras. J. Symb. Comp., 6(2-3):361–370, 1988.
[3] Jason P. Bell, Albert Heinle, and Viktor Levandovskyy. On noncommutative finite factorization domains. Trans. Amer. Math. Soc., 369:2675–2695, 2016.
[4] Jose Bueso, Jose Gómez-Torrecillas, and Alain Verschoren. Algorithmic Methods in Non-Commutative Algebra. Applications to Quantum Groups. Kluwer Academic Publishers, 2003.
[5] Frédéric Chyzak and Bruno Salvy. Non-commutative elimination in Ore algebras proves multivariate identities. J. Symb. Comp., 26(2):187–227, 1998.
[6] Mark Giesbrecht, Albert Heinle, and Viktor Levandovskyy. Factoring linear differential operators in n variables. J. Symb. Comp., 75:127–148, 2016.
[7] Gert-Martin Greuel, Viktor Levandovskyy, Oleksander Motsak, and Hans Schönemann. Plural. A Singular 4-1-0 subsystem for computations with non-commutative polynomial algebras. Centre for Computer Algebra, TU Kaiserslautern, 2016.
[8] Gert-Martin Greuel and Gerhard Pfister. A SINGULAR Introduction to Commutative Algebra. Springer, 2nd edition, 2008.
[9] Dmitry Grigor'ev. Complexity of factoring and calculating the GCD of linear ordinary differential operators. J. Symb. Comp., 10(1):7–37, 1990.
[10] Johannes Hoffmann and Viktor Levandovskyy. A constructive approach to arithmetics in Ore localizations. In Proc. ISSAC'17, pages 197–204. ACM Press, 2017.
[11] Abdelilah Kandri-Rody and Volker Weispfenning. Non-commutative Gröbner bases in algebras of solvable type. J. Symb. Comp., 9(1):1–26, 1990.
[12] Manuel Kauers, Maximilian Jaroschek, and Fredrik Johansson. Ore polynomials in Sage, 2013.
[13] Christoph Koutschan. HolonomicFunctions (user's guide). RISC Report Series No. 10-01, University of Linz, 2010.


[14] Günter R. Krause and Thomas H. Lenagan. Growth of Algebras and Gelfand-Kirillov Dimension, volume 22 of Graduate Studies in Mathematics. American Mathematical Society, revised edition, 2000.
[15] Heinz Kredel. Solvable Polynomial Rings. Shaker, 1993.
[16] Heinz Kredel. Parametric solvable polynomial rings and applications. In Vladimir P. Gerdt, Wolfram Koepf, Werner M. Seiler, and Evgenii V. Vorozhtsov, editors, Proc. CASC'15, pages 275–291. Springer International Publishing, Cham, 2015.
[17] Heinz Kredel. The Java Algebra System (JAS), since 2000.
[18] Viktor Levandovskyy. Non-commutative computer algebra for polynomial algebras: Gröbner bases, applications and implementation. Dissertation, Universität Kaiserslautern, 2005.
[19] Viktor Levandovskyy. Intersection of ideals with non-commutative subalgebras. In J.-G. Dumas, editor, Proc. ISSAC'06, pages 212–219. ACM Press, 2006.
[20] Viktor Levandovskyy and Hans Schönemann. Plural - a computer algebra system for noncommutative polynomial algebras. In Proc. ISSAC'03, pages 176–183. ACM Press, 2003.
[21] Leonid Makar-Limanov. The skew field of fractions of the Weyl algebra contains a free noncommutative subalgebra. Communications in Algebra, 11(17):2003–2006, 1983.
[22] John C. McConnell and J. Chris Robson. Noncommutative Noetherian Rings, volume 30 of Graduate Studies in Mathematics. American Mathematical Society, 2001.
[23] Øystein Ore. Linear equations in non-commutative fields. Annals of Mathematics, 32(3):463–477, 1931.
[24] Øystein Ore. Theory of non-commutative polynomials. Annals of Mathematics, 34(3):480–508, 1933.
[25] Joris van der Hoeven. On the complexity of skew arithmetic. Applicable Algebra in Engineering, Communication and Computing, 27(2):105–122, 2016.
