FUNCTIONAL METRICS: PROBLEMS AND POSSIBLE SOLUTIONS. Roberto Meli D.P.O. Srl – via Flaminia, 217 - 00196 Roma (Italy) E-mail: [email protected] http://web.tin.it/dpo

ABSTRACT

Functional metrics (especially IFPUG Function Points and Mark II Function Points) are gaining dominance over "technological" metrics (Lines of Code). Although this shift is taking place quite rapidly from the "cultural" standpoint, on a practical level the process is much slower, due to the large quantity of code measurements already present in any company and to the scepticism with which functional metrics are received in some technical environments. The advantages of the first approach over the second are by now well known. There are, however, some problems regarding functional metrics that need to be addressed by the technical community. This paper aims to highlight the major drawbacks of the IFPUG Function Point metrics and to discuss some possible solutions.

1. INTRODUCTION

In the field of software measurement, Lines of Code have long had the leading role in measuring an application's size. This role was certainly not undeserved: a software program perceived as large, important, or complex - when the chiefly used programming languages were Cobol or Assembler - most likely was considerably large in terms of Lines of Code (LOC), and vice versa. Then software began to be developed in increasingly involved ways and environments (at least, so the story goes), and Lines of Code became increasingly difficult to identify and therefore to measure, transforming themselves, through Object Oriented environments and Web applications, into "metaphysical" concepts. While this was taking place, a new way of thinking took root at the end of the 1970s: if software is of use for something and to somebody - otherwise, why produce it? - the size of a program must be measured from the user's point of view, by valuing objects that have a direct meaning for the user. Now we were getting somewhere! This was a Copernican revolution that saw the centre of the universe shift from the white-gowned data processing centre technicians, with their cryptic language for initiates, to the users of the software, who want systems to support them in their main business, not to force them to learn abstruse, complicated actions in order to store and use their own data. This revolution began with Albrecht's introduction of Function Points. And although other works were going in this direction, none did so with the pragmatism and simplicity that set apart the work on Function Points. Function Points were then developed over time, becoming an object of standardization by a group of international interest: the International Function Point Users Group (IFPUG). Sooner or later, all religions generate divisions. This is what happened to Function Points which, after the advent of Charles R.
Symons [1], underwent the schism that was given the name Mark II.

(This work continues and supplements some of the author's contributions to previous works cited in the references. The paper assumes that the reader is familiar with the Function Point Analysis technique and the related models and concepts; please refer to the bibliography if this is not the case.)

In effect, it initially appeared that Mark II Function Points were only to be a quicker way to estimate IFPUG Function Points, but it is now clear that MKII is a full-blown metrics system, an alternative to the IFPUG one. Both metrics systems are intended to measure the same "functional" aspect of a software application, and should therefore be proportional to - and convertible into - one another. But since it is difficult to grasp what exactly an application's functional aspect is, establishing such a proportion is a difficult, if not desperate, undertaking. In our opinion, Mark II Function Points constitute a metrics system whose conceptual model is incompatible with that of IFPUG FPs, and the two therefore cannot be compared. In fact, the basic objects measured are of a different nature. In the case of Mark II FPs, the elementary process follows an Input, Processing and Output logic, according to which each elementary process has an input, a processing and an output component, and refers to logical files. According to IFPUG FPs, an elementary process must have a single primary nature: either input or output. Even the Inquiry, which has both natures, does not allow for any algorithmic processing, but merely data retrieval. In addition, the IFPUG technique considers Logical Files as independent objects to be counted, not merely referenced. Since the elementary objects focused on are different, it is impossible to compare the results, and it is meaningless to work to find an equivalence between the two different scales. In this work, we shall consider only IFPUG 4.0 Function Point metrics, since they are the most widespread throughout the world. So, why are functional metrics useful? Because they meet at least two fundamental needs experienced by organizations:
1. measuring software on a "business value" basis, from the user's point of view;
2. obtaining a measurement as independent as possible from the productive contexts in which the same application is developed and utilized.
A measurement responding to the first need facilitates the dialogue between the technician and the user, or between the supplier and the client. It allows them to concentrate on the central aspects of the supply, i.e., on "what" the user can do with a given piece of software (inputs, outputs, algorithms and data stored), rather than on "how" to produce it from a technical standpoint. The second need is linked to the need to compare, from a technological standpoint, different solutions aiming at the same business objectives. The purpose is to obtain indirect indications of the productivity of a given development environment or of the quality of a specific final product. Another important objective that we generally seek to achieve using size metrics (functional or non-functional) is that of:
3. helping to forecast effort, cost, and time to develop a new project or maintain an existing application.
The third need is of a management nature, and arises from the need to plan the resources involved in developing or maintaining a software application with a higher level of certainty than mere intuition provides. Unfortunately, the first two objectives seem to be incompatible with the third one. If we need to forecast effort starting from the functional size of the software, and we wish the functional size to be independent of technologies and other productivity factors, we will end up with an effort estimate independent of productivity! Instead, we need a real effort estimate that

takes into consideration actual team capability, technologies used, hardware architectures, etc. The solution is to recognize that a relationship of functional dependence surely exists between an application's development effort and its size. However, this relationship involves innumerable productive variables that sometimes rise from their supporting roles to take centre stage. Some of these variables are:
• working methods
• CASE tools
• programming languages
• technological platforms
• team experience
• criticality and reliability requirements of the final product
• complexity and innovation of the problem to be solved, and of the software
• expected quality
• economic/organizational/competitive context
• nationality

It is commonly accepted that size is the so-called "primary driver" of the functional relationship. This means that size variations are those that determine, to the greatest degree, the variations in the related effort. This is why we generally account for the auxiliary productivity variables through a Multiplicative Adjustment Factor that is applied after calculating the effort from the size. The mathematical models linking size to effort are of the greatest variety, ranging from linear equations - straight lines - to exponential ones, via polynomials. The choice of one or the other is a matter of methodological conviction, intuition, experience, and at times (unfortunately) pure aesthetic taste. Independently gathered empirical data must then confirm or refute the productive models proposed. At this point, let us discuss some problems of Function Point metrics (IFPUG 4.0) [2], and the possible solutions proposed.
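As a minimal sketch of the kind of model just described - size as primary driver, corrected by a Multiplicative Adjustment Factor - the size-to-effort relationship might be expressed as follows. The coefficients and correction factors below are invented placeholders, not values taken from any calibrated model; a real model must be calibrated against independently gathered empirical data.

```python
# Illustrative size-to-effort model: effort = a * size^b, corrected by a
# multiplicative adjustment factor built from auxiliary productivity
# variables (team experience, CASE tools, platform, etc.).
# All coefficients here are hypothetical.

def estimate_effort(size_fp, a=1.0, b=1.0, adjustment_factors=()):
    """Return estimated effort in person-months for a given size in FPs."""
    eaf = 1.0
    for factor in adjustment_factors:  # each factor >1 penalizes, <1 helps
        eaf *= factor
    return a * size_fp ** b * eaf

# A linear model (b = 1, i.e. a straight line) with two hypothetical
# correction factors applied after the size-based calculation:
effort = estimate_effort(100, a=0.1, b=1.0, adjustment_factors=(1.1, 0.9))
print(round(effort, 2))  # 9.9 person-months under the assumed factors
```

An exponential model is obtained simply by choosing b > 1, which makes large projects disproportionately more expensive than small ones, as some of the models mentioned above assume.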

2. Are Function Points non-rigorous metrics?

The fundamental thesis against the FP technique holds that the counting procedure and the features of its result do not allow it to be validated within the formal framework of measurement theory [3]. Indeed, this point is true and cannot be rejected. What follows is not a formal validation of the FP technique, but rather a call to pragmatism. We think that the real problem is that software itself does not exhibit simple, evident features useful for the measurability of its own technical dimensions. Software is not a liquid or a solid body whose "volume" and "mass" can be detected and measured. To date, we have yet to reach a common definition of what software even is. How is it possible that a number attributed to something of such an elusive nature can have its own formal characteristics? Indeed, formal theories applied to software have quite often proven of little use to the business world. How many organizations actually use formal techniques and their corresponding theorems to prove the correctness of each program? In practice, what we usually do is look for all possible "test cases," in the hope that the actual use of the software created will not present situations too different from those conceived during testing. The software that rules the world nowadays was created through processes too little industrialized and automated to be subject to the laws of theoretical or applied mathematics.

Additionally, in the case of the "size" of a piece of software it is very difficult to agree upon a definition of the attribute to be measured; therefore, when looking for a size measurement we have to proceed with a vague image of what we think software to be. For example, as far as Lines of Code are concerned, once we have agreed upon what a Line of Code is, are we sure that we know what they actually measure? Do they measure the programmer's attitude, the tools used to program, functionalities for the user, megabytes stored in a computer's memory, or what else? Software applications do not have a visible "volume" or "size" allowing them to be arbitrarily dismantled and analysed according to their constituent elements. Taking one element off the program can affect the proper functioning of the whole system. Therefore, applications cannot be put on an ordinal scale with respect to these attributes, since they are not homogeneous entities, and the very meaning of "volume" or "size" is not at all clear. Not only does the mathematical operation of taking the difference between two applications make no sense, but adding them together, as the result of combining the two applications, could produce a piece of software incompatible with itself, non-functioning, and therefore of no usefulness. Thus, we have to accept the idea of imperfection and lack of rigour in anything related to a product with such a low level of definition as software has today. In this field at present, the practical engineering approach is better than the mathematical one. However, this bears no particular negative consequences, for the following reasons:

• Unlike what occurs in other disciplines, software measurement does not affect the activity of product construction as much as it affects its management and evaluation. In other words, while a mixture of elements must be measured to obtain good-quality concrete and build a solid and reliable bridge, the same does not hold true for software, since in order to create a program it is not necessary to measure its dimensions. As a matter of fact, in almost all business environments, metrics activities tend to be considered a management overhead, with no immediate return for those who created the program. Instead, measurement is necessary if we wish to determine the size of a working group on the basis of the activities to be carried out, or to evaluate an application's fault density, and so on. Thus, in general, measurement is especially related to purposes regarding the management of the production or business process. These purposes are quite far from mathematical rigour.



• Even if it is impossible to agree upon an exact definition of software and its "dimensions," "pseudo"-measurements of some use from the practical standpoint can still be devised.



• The essential point is that, given a body of rules for associating a measure with an application, the measurement has to be repeatable and independent of the performer.



• If the number obtained by applying the aforementioned rules proves to be empirically correlated with some software phenomena we wish to forecast and control (quantity of operations and/or data the user can utilize; effort, duration and cost of development, etc.), why should we refuse it only because the measure is not formally validated?

To conclude this brief and necessarily incomplete analysis, we wish to assert that Function Point measurement sufficiently meets the requirements just referred to. It is therefore a case of "pseudo-measurement," and it is useful to employ it until new and more advanced metric tools become available, even if these still seem quite distant given the extremely primitive state of software engineering as a scientific discipline.

3. Are Function Points not early enough?

As experts often state, one important advantage of a Function Point measurement is that its value can be determined at an early stage of a software project, i.e., as soon as the detailed functional user requirements of a business application are evident and available. Unfortunately, for the purposes of Project Management this is not early enough, since the level of detail needed to apply the IFPUG standard counting rules implies that a large portion of the project has already been carried out (the Functional Specification accounts for 15 to 40% of the total work effort). Otherwise, it would be impossible to identify External Inputs, External Outputs, External Inquiries, Internal Logical Files, and External Interface Files without running into significant evaluation errors. In fact, if we consider the importance of estimation with respect to the management of the project, we end up with a curious and perhaps paradoxical phenomenon: measurement is very useful when we do not have enough elements to obtain it (Feasibility Study), but when we can identify it with absolute accuracy (just before the final product is ready) it is no longer necessary (at least for the purposes of predicting effort). This means that we need to have at our disposal an FP estimation method that can already be used after a carefully produced Feasibility Study. This document has to roughly define what will later be subjected to a more in-depth, detailed analysis, which can in turn make real FP counting possible. In fact, experts increasingly use the term "counting" to refer to the use of IFPUG standard rules for identifying FP values, and "estimation" for all alternative techniques for forecasting the same FP values.
To clarify the above, it should be kept in mind that a skilled FP analyst may be able to assign an FP measurement to an application simply by observing its specifications and then placing faith in his or her intuition and experience. The final result of this process cannot be called "counting," even if, depending on the case and the person, it can be very close to the number obtainable by applying the detailed IFPUG rules. One possible solution to the problem of estimating Function Points is to use the Early Function Point technique described in [4]. The method is based on identifying software objects, such as the functionalities and data provided by the software under evaluation, at different levels of detail. The key elements are: macrofunctions, functions, microfunctions, functional primitives and data entities. Each of these objects may be assigned a set of Function Point values based on statistical tables. The approach has proved quite effective, providing a response within ±10% of the real FP value in most cases. Other possibilities are "data driven" estimations, in which an application's total Function Point count is extrapolated from the number of logical files identified within the same application.
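The Early Function Point idea just described can be sketched as follows. Note that the (minimum, most likely, maximum) FP values below are invented for illustration only; the actual statistical tables of the technique are defined in [4].

```python
# Sketch of Early Function Point estimation: software objects identified at
# different levels of detail each receive a (min, likely, max) FP value from
# a statistical table. All numeric values here are hypothetical placeholders.

EFP_TABLE = {  # (min, likely, max) unadjusted FPs per object - illustrative
    "macrofunction":        (100, 150, 250),
    "function":             (10, 20, 35),
    "microfunction":        (4, 5, 7),
    "functional_primitive": (3, 4, 6),
    "data_entity":          (5, 7, 10),
}

def early_fp_estimate(counts):
    """counts: mapping object type -> number identified at the Feasibility
    Study stage, e.g. {'function': 8, 'data_entity': 5}.
    Returns a (low, most likely, high) FP range."""
    low    = sum(n * EFP_TABLE[t][0] for t, n in counts.items())
    likely = sum(n * EFP_TABLE[t][1] for t, n in counts.items())
    high   = sum(n * EFP_TABLE[t][2] for t, n in counts.items())
    return low, likely, high

print(early_fp_estimate({"function": 8, "data_entity": 5}))  # (105, 195, 330)
```

Returning a range rather than a single number mirrors the distinction drawn above between "estimation" and "counting": the result is a forecast, to be replaced by a real count once the detailed analysis exists.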


4. Do Function Points fail to consider the algorithmic aspect?

Another common criticism of IFPUG Function Points is that the method fails to sufficiently consider the algorithmic aspect of the elementary processes, basing the calculation on the quantity of data treated (DET, RET, FTR) and not on the complexity of the treatments themselves. This is why they are generally not considered suitable for measuring domains such as process control, telecommunications software, or real-time systems. In our opinion, most of the problems derive from a failure to carefully apply the standard rules, which may turn out to be useful on many unexpected occasions. However, there are most certainly cases in which this criticism is founded. For example, we may consider an elementary process that inverts a matrix, calculates an integral, or transforms a bit-map image into a vector format. In these cases, the few input and output data, and the consequently few FPs attributed to these processes, poorly represent the value that these processes have for the user of the system. If we could associate each EI, EO, and EQ with a standardized complexity parameter representing the particular type of algorithm mainly used in the elementary process, the FP contribution of that elementary process could be properly weighted. In this way, elementary processes having very few data items (DETs and FTRs) but very complex treatment would make a proper contribution to the final FP count. Table 1 shows a taxonomy originally proposed in [5] and appropriately modified and used experimentally in the Extended Function Point technique [4] to modify the FP count of each individual elementary process (EI, EO, EQ) based on its intrinsic algorithmic complexity. The coefficient on the right is multiplicative, and increases or diminishes the basic value of the contribution of a particular elementary process identified by the analysis.

Table 1

data direction               0.3
amalgamation                 0.6
separation                   0.6
simple calculations          0.7
editing                      0.8
generic process              1.0
data retrieval               1.0
data verification            1.0
text manipulation            1.2
processes synchronization    1.5
complex graphical display    1.8
complex computation          2.0

Unlike Capers Jones's Feature Points which, besides transactions and files, also consider the number and type of the algorithms present in the application (but only at a global level), Extended Function Points allow the association of each elementary process with the prevalent type of algorithm and its relative degree of complexity. This is therefore a more detailed system of evaluation, which only needs an internationally standardized taxonomy table to represent all the different kinds of known algorithms and their relative weights.

5. Are Function Points linked to the value for the user or to the cost of production?

It is often stated that Function Points measure the functional size of a software application from the point of view of the experienced user, and at the same time that they are linked to the usefulness that the software has for this user, i.e., its Use Value. This is like saying that the kilogram we use to measure bread is also a measure of the degree to which that quantity of bread can satisfy the consumer's hunger. Does this property appear likely? Does the size grow along with the Use Value? And given the fact that, according to utilitarian theory, each good has not one single value, but as many values as there are consumers (or, in this case, users) of the software, should we therefore have as many measurements in Function

Points as there are different points of view? To answer these questions, let us try to understand who the user is, and what the Use Value is for him or her. The IFPUG 4.0 standards [2] do not give a clear, unequivocal definition of the User. At a number of points, however, they suggest a definition broadened to include the direct user, the indirect user, the hierarchical manager, the technical/operative user, etc. So the term Experienced User can be taken as an abstraction: a virtual figure who in reality is a composite of a number of different physical figures entitled to express requirements on the software project, guided in this by an expert in analysis techniques. This offers a solution to the multiple-user problem: for each application there is one and only one experienced user, who is the composite of all the subjects indicated above. It is no accident that in the recently developed discipline of requirements engineering, point-of-view based approaches are winning the day. What, then, is the Use Value of the software for a subject such as the one described above: physically non-existent but all too demanding? An initial consideration regards the fact that each software project has functions that are more important than others. A function that makes it possible to control air traffic will surely be more important than one that determines to which printer to direct the log file of the day's calls! If we make the simplifying supposition, instead, that all functions are democratically equal, and that the functional size is linked more to the number of functions than to their intrinsic mission, the second element of subjectivity falls away as well. This is exactly what occurs when, for example, Data Element Types (DETs) and File Types Referenced (FTRs) are counted to determine the complexity of an External Input.
If this approximation is acceptable, we can say that Function Points are linked to the quantity of things that can be done with a given piece of software, and that this corresponds to its Use Value. Are the two hypotheses we have made acceptable? In our view, the answer is yes. We have no reason to think that these assumptions introduce more problems than they solve, given that we are in the field of intangible goods, where subjectivity is to some extent ineluctable. The problem with Function Points actually comes from another direction. As Function Points are presently defined, they appear more linked to the Labour Value than to usefulness for the User. If this is the case, we have an intrinsically inconsistent metric, which purports to measure the software's Use Value while actually being linked to its production costs, and therefore to the Labour Value. However, it is known that in a market economy Use Value is not linked to production costs, except by chance. To return to our metaphor, it is as if we wished to measure bread not by the kilogram (a measurement relatively proportional to the ability of the good to satisfy the consumer's hunger) but by the hours of work needed to produce that piece of bread! With the variety offered by modern technology, it may happen that a certain quantity of bread is produced in half a day, or even in one hour, while maintaining the same ability to satisfy hunger. A measurement of this kind would not be at all linked to the software's aptitude to satisfy the consumer's need, but only to the productive conditions. There are two main elements supporting the thesis described above:

1. The IFPUG counting practices include the Value Adjustment Factor (VAF; VAFA; VAFB), which does not actually add a single elementary function to those required by the experienced user, but which - interestingly enough - represents more a way of taking the development difficulties into account, increasing or decreasing the pure functional value by up to 35% based on the greater or lesser production cost.

2. The relative weights of the elementary processes EI, EO, EQ, ILF, and EIF seem to have been assigned based more on the development difficulty related to the technologies in use in the years in which Function Points came into being than on the perception of usefulness for the experienced user. For example, why should an External Input weigh less than an External Output? Perhaps because, with the technologies of the time, it was easier to design and produce a data acquisition mask than a printed report. Today, it would likely be the other way around.

Therefore, we need to lead Function Points to measure size or - in light of what has been stated - the Use Value of the software, abandoning any kind of mingling with production effort and costs. This should not frighten all those who look to software metrics as tools for managing forecasts, projects, and contracts, and for whom the link between size and effort is of fundamental importance. In truth, they too have everything to gain from purging Function Point metrics of non-functional aspects since, should this come to pass, they would have stronger, more consistent models at their disposal, even for calculating economic variables. With regard to the composition of the weights assigned to the elementary processes, the work by Wittig, Morris, Finnie, and Rudolph [6] appears extremely promising for Function Point development, introducing as it does a formal methodological approach to defining User perceptions of the relative weights given to the factors underlying Function Point Analysis. This research effort goes precisely in the direction of completely separating the EI, EO, EQ, ILF, and EIF weights from development difficulty, and linking them to the Use Value discussed earlier.
As an initial result, we may note that the weights produced by the research cited are quite different from those included in the standard tables (on this point our opinion differs from that of the authors). The second thing we must do is abandon the use of the VAF (Value Adjustment Factor), at least as an element modifying the measurement of the application's size, perhaps recovering it either as a qualifier (the higher the VAF, the better the application) or as an element modifying the productivity value that will influence production costs, but not the size of the application.
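For reference, the standard IFPUG adjustment criticized here works as follows: fourteen General System Characteristics, each rated from 0 to 5, shift the unadjusted count by up to ±35% - precisely the mingling of size and production conditions discussed above.

```python
# The IFPUG Value Adjustment Factor: VAF = 0.65 + 0.01 * TDI, where TDI is
# the Total Degree of Influence, i.e. the sum of the 14 General System
# Characteristic ratings (each 0-5). The result therefore lies in [0.65, 1.35].

def value_adjustment_factor(gsc_ratings):
    """gsc_ratings: sequence of the 14 GSC ratings, each between 0 and 5."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    total_degree_of_influence = sum(gsc_ratings)
    return round(0.65 + 0.01 * total_degree_of_influence, 2)

def adjusted_fp(unadjusted_fp, gsc_ratings):
    """Adjusted FP count = UFP scaled by the VAF."""
    return unadjusted_fp * value_adjustment_factor(gsc_ratings)

print(value_adjustment_factor([0] * 14))  # 0.65: size shrinks by 35%
print(value_adjustment_factor([5] * 14))  # 1.35: size grows by 35%
```

The formula makes the objection concrete: two applications with identical functionality for the user can differ by up to 70% in "size" purely because of characteristics that reflect development difficulty rather than Use Value.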

6. Do Function Points fail to take software reuse into account?

Let us suppose that we have a project to develop an application that has been counted at 500 FPs in accordance with IFPUG 4.0 standard practices. Let us also suppose that it is possible to recover from a previous project a set of software modules, already developed and tested, for a total of 100 FPs, and that 150 more FPs, ready for insertion into the software designed ad hoc, are obtained on the software components market. In this case, approximately one half of the functionality required for the application to be developed will be obtained with minimal, or even no, effort. Lastly, let us suppose that a benchmarking database is fed with data supplied exclusively by organizations that develop software entirely in house, with no reuse of any kind. If the average productivity for similar projects in the database were 10 FPs/person-month, and we applied a simple formula that, for example, divides the 500 FPs by 10 FPs/pm, we would find that we need 50 person-months to develop the new project when, in all probability, considering the savings introduced by reuse, perhaps half this figure would be sufficient. Waste is ensured by the well-known property by which a software project behaves like a gas, which tends to occupy all the space it is given, thereby spreading itself out to absorb the 50 person-months without even a suspicion that half of this could have been avoided. This sample scenario is not hypothetical in the least; it will become more and more the norm in the years to come, when the components market has developed as the packages market did, and we will be able to obtain information systems by assembling what we may call prefabricated parts, already prepared because they have been developed in house or externally - it matters little which. A benchmarking database that fails to differentiate projects based on the percentage of software reuse in each case risks contamination by data that cannot be compared with one another. It is not practical to trust that average productivity is the result of a balanced composition of diversified levels of reuse, and that this smooths out the problem's sharp edges. Indeed, it is of no use to average over the various levels of reuse as we do with other factors, because this parameter may assume excessive proportions on an individual project. On the other hand, the value of productivity averaged regardless of reuse is useful as a macro-economic index of comparison between different organizational situations because, from the competitive standpoint, knowing how to exploit reuse is a factor of competitive advantage. The Counting Practices Committee of the Gruppo Utenti Function Point Italia (GUFPI) has recently dealt with this question [7], stressing the arguments described hereunder.
Page 2-2 of the CPM 4.0 manual states: "Function points measure software by quantifying its functionality provided to the user based primarily on logical design." In effect, functionality already in existence and incorporated - through the external acquisition of generalized packages or the in-house reuse of modules developed on other occasions - is still functionality requested and obtained by the user, and should therefore be counted as if developed from scratch for the purpose of obtaining the size of the application in FPs. This means that reuse is not a size-impacting factor, at least from the external standpoint of Use Value. However, it surely impacts the work to be performed - and therefore the consequent production cost - perhaps rendering inapplicable the functional link between size and effort that we worked so hard to construct using the benchmarking database. How can we resolve this conflict between the FPs provided to the user (all of them) and those developed by the project (only some of them), which are the ones useful for forecasting purposes? One possible approach is the following: for each project, define two different measurements in Function Points. One is connected to the external user view of the software, and corresponds to Function Points as they are currently defined; the other is connected to the administrative and productive needs of the software manufacturer, who wishes to know which functionalities must be developed more or less from scratch, so as to be able to forecast and assign only those resources needed and sufficient for developing the application. This new measurement may be called Developed Function Points (DFP). Developed Function Points take into account only those functions that need to be developed entirely or in part, but not those that are effortlessly inherited. In this way, effort forecasting based on a historical productivity measured in DFPs per person-month will not be contaminated by the reuse phenomenon.


In practical terms, to determine Developed Function Points starting from Function Points, we need only assign to each element (EI, EO, EQ, ILF, EIF), classified and evaluated in accordance with the standard contribution tables, a multiplying factor that assumes values from 0 to 1 based on the estimated savings introduced by reusing that particular element. By adding up, in the usual manner, the contributions modified by the reuse coefficients, we obtain the overall DFP count, which will be less than or equal to the FP count. In a benchmarking database, both measurements should be present, in order to produce both internal and external productivity models. An appropriately modified example from the IFPUG 4.0 Counting Practices Manual illustrates the proposed method:

Transactional Function Types                 FTRs    DETs    Functional   UFP   Reuse        DUFP
                                                             Complexity
External Inputs
Assignment report definition                 1       5       Low          3     Low/0.8      2.4
Add job information (screen input)           1       7       Low          3     None/1       3
Add job information (batch input)            2       6       Low          3     High/0.4     1.2
Correct suspended jobs                       1       7       Low          3     Very H/0.2   0.6
Employee job assignment                      3       7       High         6     All/0        0
External Outputs
Jobs with employees report                   4       5       Average      5     Low/0.8      4
New dependent transactions to Benefits       1       5       Low          4     Very H/0.2   0.8
Notification message                         3       4       Low          4     Low/0.8      3.2
Employees by Assignment Duration Report      3       7       Average      5     Low/0.8      4
External Inquiries                           In/Out  In/Out
List of retrieved data                       2/1     2/3     Low          3     High/0.4     1.2
Drop-down list box                           1/1     2/1     Low          3     High/0.4     1.2
Field level help                             1/1     2/4     Low          3     None/1       3

This example yields 45 Unadjusted FPs (UFP) but only 24.6 Developed Unadjusted FPs (DUFP).
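The arithmetic of the example can be sketched as follows; only each element's unadjusted contribution (UFP) and reuse coefficient are needed, and the pairs below are taken row by row from the example table.

```python
# Sketch of the Developed Function Point computation: each element's standard
# unadjusted contribution is multiplied by a reuse coefficient between
# 0 (functionality fully reused, no development) and 1 (no reuse at all).

# (UFP, reuse coefficient) pairs, row by row from the example table
ELEMENTS = [
    (3, 0.8), (3, 1.0), (3, 0.4), (3, 0.2), (6, 0.0),   # External Inputs
    (5, 0.8), (4, 0.2), (4, 0.8), (5, 0.8),             # External Outputs
    (3, 0.4), (3, 0.4), (3, 1.0),                       # External Inquiries
]

ufp = sum(u for u, _ in ELEMENTS)                # standard IFPUG count
dufp = sum(u * r for u, r in ELEMENTS)           # reuse-weighted count

print(ufp, round(dufp, 1))  # 45 24.6
```

Keeping both totals, as proposed above, lets the same benchmarking database serve two purposes: `ufp` feeds the external, user-view productivity models, while `dufp` feeds the internal effort-forecasting models that must not be contaminated by reuse.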


In order to standardize the reuse types, the corresponding numerical percentages, and the multiplicative adjustment values, research has been started, involving professionals from a number of different organizational contexts, aimed at reducing as much as possible the level of subjectivity in attributing weights and coefficients. The results will be presented in a technical report when the experimentation phase is completed.

7. Conclusion

In this work, we have sought to indicate some areas in which the Function Point metrics standardized by IFPUG have been criticized, and to propose some possible solutions. It is to be hoped that these problems will be appropriately discussed so as to find ways to improve applicability, thereby further spreading an approach that has already proved enormously useful, but that must keep pace with the way software production is evolving now and, increasingly, in the future.

8. References

[1] Symons, C.R., "Software Sizing and Estimating: Mk II FPA", John Wiley & Sons, England, 1991.
[2] IFPUG, "Function Point Counting Practices Manual, Release 4.0", Westerville, Ohio, 1994.
[3] Poels, G., "Why Function Points Do Not Work", Guide Share Europe Journal, August 1996.
[4] Meli, R., "Early and Extended Function Point: A New Method for Function Points Estimation", IFPUG Fall Conference, Scottsdale, Arizona, USA, September 15-19, 1997.
[5] De Marco, T., "Controlling Software Projects: Management, Measurement & Estimation", Prentice Hall, Englewood Cliffs, NJ, 1982.
[6] Wittig, Morris, Finnie and Rudolph, "Formal Methodology to Establish Function Point Coefficients", IFPUG Fall Conference, Scottsdale, Arizona, USA, September 15-19, 1997.
[7] GUFPI Counting Practices Committee, "Function Point: Linee Guida Italiane" (Italian guidelines), http://www.gufpi.com/cpc, 1997.
