1999 SEPG Conference, Atlanta, Georgia, 8-11 March 1999.

Practices of High Maturity Organizations

Mark C. Paulk
Software Engineering Institute
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract

Over the last few years the Software Engineering Institute has participated in several workshops and site visits with maturity level 4 and 5 software organizations. This paper summarizes the lessons learned from those interactions with high maturity organizations, while preserving the anonymity of the organizations involved. Specific areas of interest include statistical process and quality control and product lines/families, but the observations cover a variety of engineering and management practices, including issues outside the scope of the Capability Maturity Model for Software. A survey was distributed to informally test the anecdotal observations about high maturity practices.

1. Introduction

During the last several years, we have had the privilege of working with a number of high maturity software organizations, as measured by the Capability Maturity Model for Software (CMM) [Paulk95], in workshops, conferences, assessments, and site visits. The Software Engineering Institute (SEI) has hosted workshops for Level 4 and 5 organizations, and we have been invited to participate in various company workshops on becoming Level 4. We have also visited a number of high maturity organizations, both informally and during assessments, and had the opportunity to examine their processes in some detail. While much of the specific information the SEI has learned is covered by non-disclosure agreements, we can discuss the lessons learned on good engineering and management practices in general terms, and, where the organizations involved have published papers on their processes, include references that provide greater detail. The purpose of this paper is, therefore, to summarize in a general way the techniques, methods, and lessons learned about getting to CMM levels 4 and 5. It does not, however, attempt to describe all good software engineering and management practices. Although these observations are specific to software organizations, I believe that these practices, or discipline-specific variants, are valuable to any high maturity organization in any engineering discipline. Even low maturity organizations should find it useful to consider these practices in beginning their process improvement journeys.

(Capability Maturity Model and CMM are registered in the U.S. Patent and Trademark Office. Personal Software Process, PSP, Team Software Process, and TSP are service marks of Carnegie Mellon University.)

It should be noted that some of these organizations achieved high maturity before the Software CMM was written. The CMM, which is summarized in Figure 1, encapsulates good engineering and management practices in a structured framework for process improvement, but the concepts it embodies should be familiar to anyone knowledgeable about Total Quality Management (TQM). For example, the Onboard Shuttle project and Boeing Space Transportation Systems both evolved high maturity processes before the CMM existed, although both have adopted it to guide their ongoing improvement efforts. They did what was necessary to produce the high quality products that they demanded of themselves.

Level 5, Optimizing (focus: continual process improvement): Defect Prevention; Technology Change Management; Process Change Management.
Level 4, Managed (focus: product and process quality): Quantitative Process Management; Software Quality Management.
Level 3, Defined (focus: engineering processes and organizational support): Organization Process Focus; Organization Process Definition; Training Program; Integrated Software Management; Software Product Engineering; Intergroup Coordination; Peer Reviews.
Level 2, Repeatable (focus: project management processes): Requirements Management; Software Project Planning; Software Project Tracking & Oversight; Software Subcontract Management; Software Quality Assurance; Software Configuration Management.
Level 1, Initial (focus: competent people and heroics).

Figure 1. An overview of the Software CMM (maturity levels, their focus, and their key process areas).

The Software CMM is intended to cover a wide range of organizations. This paper covers practices from both large (over 500 people) and small (less than 50 people) organizations. It covers both development and maintenance practices, including shops that maintain third-party software and/or deal with 20-year-old legacy systems. The practices of both government contracting and commercial software organizations are discussed.

2. Data Collected on High Maturity Practices

The data summarized in this paper come from a variety of sources, including published case studies, workshops, site visits, and assessments. Data provided under non-disclosure agreement are treated by the SEI as company confidential and have been sanitized for this paper.

A workshop for level 4 and 5 organizations was held by the SEI in early 1996 with six high maturity organizations participating. The workshop results are summarized in Appendix A. In addition to this and similar CMM-related workshops hosted by the SEI, we have also participated in external workshops on level 4 practices.

While workshops and site visits may provide useful insights into industry practices, they do not necessarily provide a good feel for the breadth of deployment of specific techniques across industry. To obtain a broader perspective on these high maturity techniques, an informal survey was distributed to maturity level 4 and 5 organizations. At this writing (December 1998), the SEI assessment database lists eighteen level 4 organizations and seven level 5 organizations who have reported their assessment results.[1] Thirteen organizations responded to the survey, including six organizations appraised at level 4 and six appraised at level 5. One organization is included that has been assessed as having many level 4 and 5 practices in place, but whose last appraisal was for level 3. The results of the survey are summarized in Appendix B.

For those interested in direct information on high maturity practices, a number of case studies have been published:
§ United Space Alliance (formerly IBM, Loral, and Lockheed Martin), Onboard Shuttle [Billings94, Fishman97, Krasner94, Paulk95 (chapter 6)]
§ Boeing Space Transportation Systems [Fowler97, Wigle97, Yamamura97]
§ Hughes Aircraft [Humphrey91, Willis98]
§ Motorola, Global System for Mobile Communication [Miller98]
§ Motorola, Government Electronics Division [Diaz97]
§ Motorola India Electronics Pvt Ltd [MIEL95]
§ Raytheon Electronic Systems, Software Engineering Laboratory [Dion93, Haley95, Haley96]
§ Oklahoma City Air Logistics Center [Butler95, Butler97]

3. Common Practices of High Maturity Organizations

Although the emphasis of this paper is on good engineering and management processes, it should also be noted that high maturity organizations typically have a broader scope of improvement concerns than just the CMM's process issues. Some high maturity organizations, such as Onboard Shuttle and Boeing Space Transportation Systems, were doing process improvement long before the Software CMM was published [Yamamura97, Fowler97]. Others, such as Motorola India, were started with one business objective being high process maturity [MIEL95].

[1] A regularly updated maturity profile is available at www.sei.cmu.edu/sema/profile.html.


If there is a Total Quality Management (TQM) initiative within the company, the high maturity organization's software process improvement program is explicitly aligned and coordinated with the TQM initiative. Aligning with TQM initiatives can be difficult for high maturity organizations, however, when there are "10X" improvement targets. If the organization is already performing at Six Sigma levels, improving dramatically is a major challenge when only 15 defects were found after system testing in the 2.2 MSLOC shipped last year! In such cases, the alignment may be conceptual rather than quantitative.

Eleven of the thirteen organizations surveyed had ISO 9001 certification. Some commented that they began their process improvement efforts using ISO 9001, then shifted to the CMM after obtaining certification. Others obtained certification as a business requirement in their market.

High maturity organizations address issues outside the scope of the CMM. Their improvement programs include a strong emphasis on automating the software process and addressing people issues. Processes, data collection, and statistical analysis are automated wherever practical.

High maturity organizations generally emphasize openness, communication, and a commitment to quality and the customer at all levels. They appreciate the "peopleware issues" [DeMarco87, Constantine95, Weinberg94, Curtis95]. They encourage a process orientation in their staff. Worker empowerment and participation in process definition and improvement activities are real; process improvement is part of everyone's job. Their strategic approach to quality management involves the linkage of quality with business goals and a focus on customer satisfaction and delight. There is a "quality culture" in high maturity organizations [Miller98]. Rewards and incentives are established for process improvement efforts, and worker empowerment and participation are more than just slogans. People believe in the process, and when mistakes happen (as they inevitably do in any human endeavor), the focus is on improving the process, not disciplining the people, who may only be the bearers of bad tidings.

High maturity organizations recognize the importance of good staff. To quote one participant in the 1996 workshop, "Getting the right person into the right job on the project is still the most important aspect of project success. People are not plug-compatible. The expertise of individuals is critical. Process is an enabler; not a replacement." Knowledge and skills are systematically cultivated in software engineering, management, interpersonal skills, and the application domain. Another workshop participant stated, "We have the philosophy of not assigning people to jobs they are not prepared for." It was also observed that not everyone needs to have deep domain knowledge so long as the key lead engineers do, although at least one member of each team needs to have the right domain expertise to support the team effort.
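To put the scale of the "10X" tension in perspective, here is a rough worked calculation using only the figures quoted above; the framing is illustrative and is not drawn from any particular surveyed organization.

    # Rough arithmetic behind the "10X" tension described above (illustrative only).
    shipped_ksloc = 2200                              # 2.2 MSLOC shipped last year
    post_test_defects = 15                            # defects found after system testing
    density = post_test_defects / shipped_ksloc       # ~0.0068 defects per KSLOC
    ten_x_target = post_test_defects / 10             # a "10X" goal allows only ~1.5 escaped defects
    print(f"{density:.4f} defects/KSLOC; 10X target: {ten_x_target} defects")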


High maturity organizations apply common sense in using the CMM. New projects are typically expected to address all 18 key process areas, but alternate implementations are used when appropriate. When there is a controversy, (strategic) business value is the foremost factor in resolving the situation; this sometimes means going against the "letter of the CMM" or customer desires, but doing so with data to support an informed decision.

3.1 Customer Interaction

It is very difficult to be a high maturity supplier if you have a low maturity customer (or low maturity suppliers or partners, in the case of strategic alliances, joint ventures, and virtual organizations). Data are a strong foundation for making decisions. At higher maturity, there is a better foundation and understanding to explain to customers why you do things a certain way. The customer is usually reasonable when presented with data, facts, and frequent patterns of success, e.g., delivering on schedule and budget. The general philosophy of high maturity organizations is to proactively work with the customer to gain a mutual understanding of what will be done, which often means educating the customer on how the organization usually does things, with data and results to back up the methods. One high maturity organization provides the last two years of performance data in its proposals; although it is rarely the low bidder, consistent past performance provides credibility.

In one instance, a long-term customer in a maintenance environment did not want the software development plan updated. The supplier chose to develop a separate plan that the organization uses – an example of doing the right process thing, even if the customer is uninterested (or perhaps opposed!). Another supplier, in a commercial environment, established a customer liaison engineer, separate from the systems engineer, to work the interface with the customer. The customer liaison engineer is usually on the customer's site, and change control boards include customer representatives.

The survey indicates a general emphasis on proactively managing the evolution of the customer's requirements via evolutionary and incremental life cycles [Diaz97]. Eleven of the thirteen organizations surveyed use incremental life cycles, and nine of the thirteen (with two piloting) use evolutionary life cycles.

3.2 Project Management

A practice in the CMM's project management key process areas that requires significant interpretation for maintenance organizations is size estimating. Maintenance organizations are frequently funded as level-of-effort, and little business value may be derived from size estimating. Although size may be a useful measure for enhancements, for corrective maintenance "size" is likely to be "number of problem reports," and the variability is likely to be quite high. All thirteen organizations in the survey use lines of code as a size measure (and two use function points also, with two more organizations piloting them), but comments pointed out that there are instances where size estimating is not of value for maintenance – schedules and budgets may be fixed a priori for maintenance projects, and the variable parameter is the functionality of the update package. It is also interesting to note that only eight of the thirteen organizations surveyed use cost models in their planning process, and one maintenance organization had actively rejected them.

High maturity organizations systematically manage risks. Eleven of the thirteen organizations surveyed do "systematic" risk management. We have detailed insight into both organizations that indicated they do not. Based on my personal observations, I would characterize both as a) operating in a high risk environment; b) having a profound knowledge of the risks associated with their (fairly specialized) application domains; and c) having internalized risk management so thoroughly – and systematically – into their processes that they no longer recognize how mature their risk management process is. The conclusion appears to be that systematic risk management is an intrinsic characteristic of high maturity, which is integrated into the process and internalized by the staff.

In high maturity organizations, many traditional management responsibilities are delegated to specified roles in the process, e.g., progress tracking, day-to-day customer interfaces, and process improvement. Managers are freed to focus on longer-term strategic and operational issues. Managers can truly "manage by exception" in an open culture where the messenger is not shot and problems are identified and escalated appropriately and quickly. When roles are established, and responsibility and authority delegated, the people in the roles
• speak for the organization
• are empowered to make commitments that will be honored
• are supported by their management

3.3 Measurement

High maturity organizations tend to have dedicated measurement people at the project and organization levels, although they may not be assigned full-time. They also encourage their customers to become actively involved in specifying measures and setting quality goals. Measurement in a high maturity organization can be characterized as:
• driven by business goals, in the sense of the Goal Question Metric paradigm [Basili96]
• standardized across the organization for common measures
• tailored to the specific needs of the user
• based on operational definitions that define how to collect consistent data
• collected as close to the point of origin as possible
• involving the active participation of the affected parties, including the customer

Table 1 contains a "metric evaluation table," inspired by Humphrey [Humphrey89] and refined by a high maturity organization as part of their measurement program for statistical process control. The organization uses the table in determining what measures to collect and in identifying the value they provide.


Table 1. The Metric (or Measure) Evaluation Table

Indicator of Process Performance: Is the measure a good indicator of how well the process is performing? For example, an indicator of efficiency or effectiveness.
Controllable: Can the values for this measure be predictably changed by changing the process or how the process is implemented?
Objective: Can the measurement be consistently reproduced by different people?
Timely: Can data be collected and analyzed such that you can predict and/or control process performance?
Readily Available: Is the data relatively easy and cost-effective to obtain?
Represents Customer's View of Quality: Is the measure one that the customer thinks is an important indicator of process and/or product quality? For example, an indicator of reliability.
Customer-Required: Is the measure one that the customer requires be reported?
Represents End-User View of Quality: Is the measure one that the end-user thinks is an important indicator of process and/or product quality? For example, an indicator of usability.
Represents Senior Management's View of Quality: Is the measure one that senior management thinks is an important indicator of process and/or product quality?
Organization-Required: Is the measure one the organization requires be reported? That is, is it one of the common, standard measures defined for the organization?
Represents Project Manager's View of Quality: Is the measure one that the project manager thinks is an important indicator of process and/or product quality? For example, an indicator of progress.
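As a hypothetical sketch only (not the surveyed organization's actual tooling), a table like Table 1 can be captured as yes/no scores per candidate measure and used to screen what the organization will collect; the measure names, scores, and choice of "required" criteria below are invented for illustration.

    # Hypothetical sketch: score candidate measures against the Table 1 criteria and
    # keep only those that satisfy a chosen "required" subset.
    CRITERIA = [
        "indicator_of_process_performance", "controllable", "objective", "timely",
        "readily_available", "customer_view_of_quality", "customer_required",
        "end_user_view_of_quality", "senior_management_view_of_quality",
        "organization_required", "project_manager_view_of_quality",
    ]

    candidate_measures = {
        # Entries are invented examples, not survey data.
        "inspection defect density": {c: True for c in CRITERIA},
        "lines of code per staff-month": {
            **{c: False for c in CRITERIA},
            "objective": True,
            "readily_available": True,
        },
    }

    def worth_collecting(scores, required=("indicator_of_process_performance",
                                           "controllable", "objective")):
        """Keep a measure only if it satisfies every criterion in 'required'."""
        return all(scores.get(c, False) for c in required)

    for name, scores in candidate_measures.items():
        print(name, "-> keep" if worth_collecting(scores) else "-> drop")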

Should data analysis be performed at the SEPG level or by a measurement group? While organizational staff members may have special expertise in exploratory data analysis, one organization found that getting root cause information out of the data is difficult at the organizational level. The organization eventually added a root cause analysis task as a part of the inspection process. Inspection participants look at what happened and what the data are telling them while the contextual information is fresh in their minds. The decision on whether organizational level actions are required is made based on the results of causal analyses in multiple inspections.

Operationally, many projects use measurement to determine the allocation of resources for downstream processes. For example, Pareto analysis is likely to indicate the modules that will be most defect-prone in later phases and may provide a justification for re-designing a module. In many high maturity organizations, in addition to standard measures, project teams are empowered to define measures they believe are of value and to start or stop their use as they feel appropriate. They can develop low-level, non-reported measures unique to their needs, which will increase their process insight and understanding.

One of the cultural barriers that maturing organizations must deal with is the expectation of senior managers that they should be looking at, and reacting to, the data on control charts. Managers and engineers have different needs for measurement and statistical analyses. It is questionable whether managers obtain significant direct value from statistical process control. The management value is from knowing that the software processes are stable and under control – that the organization can achieve the goals within its process capability (and that the system must be changed to achieve goals beyond the current capability). The engineers, on the other hand, are able to use the measurement data and statistical tools to control the process, identify issues, and escalate problems to management for attention where appropriate.

The use of measurement data for evaluating the performance of employees is an ongoing concern for high maturity organizations. Deming was a strong advocate of statistical techniques and strongly averse to performance evaluations [Deming86]. Some high maturity organizations have adopted "360 performance evaluations," where subordinates review their superiors as part of the performance review system. Unless a "perfect" measurement system is defined that covers all critical performance parameters objectively, measurement is likely to cause dysfunctional behavior if there is any chance of the data being used against people [Austin96]. Much measurement could be performed at the individual or team level without being reported up the management chain, but no instances of this separation of concerns have been reported in high maturity organizations yet.

Trust and communication seem characteristic of high maturity organizations, but a trusting culture is vulnerable to a single short-sighted abuse. In one example, a manager pushed too hard on an individual basis for individual engineers to reduce defects. As a result, engineers pre-inspected their work products to reduce errors in the official inspections. In one sense this could be considered a positive action, but the data from the pre-inspections was not recorded in the measurement database – which injected a significant bias into analysis!

A related concern about measurement is to what degree the Hawthorne Effect drives behavior. One organization at the 1996 workshop reported a case where code reviews found 40% of defects within the phase where they were injected. As soon as phase containment effectiveness was made visible, it increased to 85% and remained there! In a sense, perhaps, if change is both systematic and positive, the question of whether the Hawthorne effect is occurring is of little import.
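As a small illustration of the Pareto analysis mentioned earlier in this section, the sketch below ranks modules by inspection defects and flags the "vital few" that might justify redesign or extra downstream verification; the module names and counts are invented, not data from any surveyed organization.

    # Illustrative Pareto ranking of modules by inspection defects (invented data).
    defects_by_module = {"mod_a": 42, "mod_b": 7, "mod_c": 19, "mod_d": 3, "mod_e": 29}

    total = sum(defects_by_module.values())
    cumulative = 0
    for module, count in sorted(defects_by_module.items(), key=lambda kv: kv[1], reverse=True):
        cumulative += count
        print(f"{module}: {count} defects, cumulative {cumulative / total:.0%}")
        if cumulative / total >= 0.8:      # the classic 80% cut-off for the "vital few"
            break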

3.4 Product and Process Assurance

Software Quality Assurance (SQA) is perhaps the most controversial key process area in the CMM. There are passionately held, opposing opinions on whether there should be an independent SQA organization or whether the SQA function should be "built into the process" as part of the quality culture that should be expected of high maturity organizations.

Questions in the survey were intended to probe this issue, but the surprising result was that all thirteen organizations indicated that they use an independent SQA group and embed the SQA function in the process. Based on my personal observations, the independent SQA group for these organizations is usually comparatively small, and it samples rather than providing 100% process and product coverage. The SQA group uses the level 4 process and product data to identify high-leverage opportunities for auditing.

As one reviewer commented, "I was convinced several years ago that SQA would disappear as an organization matures and software quality functions would be embedded into the software engineering functions. I viewed the CMM to have a problem, because SQA as an independent function was always required even as the organization matured (since it was a Level 2 KPA). However, I changed my opinion after our experience … When SQA makes the appropriate transition to process audits in addition to product audits, they can become a valuable team member. We still make mistakes in a high maturity organization, and it is still valuable to have a backup to catch those. Their role in process audits is very valuable as it separates that function from the SEPG so that the SEPG does not appear to be the policeman."

Some high maturity organizations separate process and product assurance. The "SQA group" may focus on process monitoring, while product assurance is built into the peer reviews and configuration management system. Eleven of the thirteen organizations surveyed use independent test groups.

3.5 Process Definition and Deployment

One of the major culture shifts in achieving high maturity for many organizations is transferring process ownership to practitioners and achieving an appropriate balance between control and empowerment [Simons95]. Process improvement is controlled by process ownership teams that are staffed by practitioners. Process improvement activities are coordinated by a project SEPG but not directly performed by the SEPG. Mistakes are studied for what they can teach, not who to blame.

The volume of process documentation seems to max out at level 3. High maturity organizations provide minimal, but useful, process descriptions that can be used by both very experienced and novice professionals, and the process is highly automated. Standards, procedures, and checklists may be detailed, but their use is localized to specific times – the classic design principles of information hiding, abstraction, etc., are observed. Detailed process knowledge, such as a novice would need, tends to be embedded in training materials, mentors, tools, and/or templates. All of the organizations surveyed use the Web to deploy their process assets. None of the high maturity organizations surveyed are using IDEF0 or SADT for process definition.


3.6 Training and Mentoring

Training in high maturity organizations can go to extremes. Some organizations have mandatory training for new hires that can last up to eight weeks [Dymond96], plus mandatory continuing education requirements. This training includes internally and externally developed training materials, awareness programs, and workshops. Other organizations rely heavily on formal mentoring programs to impart skills and knowledge [Paulk95 (chapter 6)]. Most high maturity organizations have both training and mentoring programs. Training is tailored to the needs and experience of the students, with an emphasis on training in the application domain.

High maturity organizations recognize the criticality of domain expertise; the organization may even be structured into product lines. Teams will usually have staffing requirements for expertise in the application domain as well as software engineering techniques and technologies. There may be significant time gaps, however, between course offerings before enough students need training to justify a course offering. These gaps, along with the intrinsic effectiveness of a good mentoring relationship, can lead to a formal mentoring program. Common characteristics of a "formal" mentoring program:
§ Mentors are knowledgeable and respected.
§ Mentors are trained in how to function effectively in the mentoring relationship.
§ The expectations for the mentor and the mentored are explicitly identified.
§ The mentoring relationship lasts for an extended period of time, typically about one year.
§ Mentor and mentored are physically close together, perhaps sharing an office.
§ Mentoring is tracked by management.
§ Mentoring skill is part of the performance evaluation criteria for the mentor.
§ Causal analysis may lead back to a breakdown in the mentoring process as the root cause of a defect.

3.7 Integrated Product and Process Development

Integrated product and process development (IPPD), also known as concurrent engineering or simultaneous engineering, is a major acquisition reform initiative within the U.S. Department of Defense (see www.acq.osd.mil/te/survey/table_of_contents.html). IPPD's philosophy can be summarized as follows: the project most effective in achieving business goals will be the one that breaks down organizational barriers with a cross-functional team and a global, systems perspective. Although IPPD is intuitively attractive and has been demonstrated to be quite powerful, it does involve hard cultural changes for most organizations. Only seven of the thirteen organizations surveyed are using integrated product and process development.


3.8 Peer Reviews

Much of the data used at level 4 comes from peer reviews. High maturity organizations tend to use inspections, the most formal variant of peer reviews, because of their emphasis on collecting data and the associated process rigor.

One of the 1996 workshop participants reported observing problems with code defect densities in 1991. Part of the problem was the quality of incoming work products because of poor phase containment effectiveness. Their peer reviews were not achieving the effectiveness reported for Fagan-style inspections [Fagan86]. When they sent engineers to Fagan's course, they saw no difference in what they were doing, but the Fagan-style inspection was more formal. After adding formality to their inspection process and visibly reporting defect discovery rates (including the incoming quality of work products), they observed a five-fold improvement in phase containment effectiveness.

Formality and data collection/analysis are typical attributes of high maturity inspection processes, and a number of inspection variants have been developed [Ebenau93, Freedman90, Knight93, Mashayekhi93]. Gilb's emphasis on inspection sampling [Gilb93], rather than 100% inspection, to guide process and product decisions is worthy of note, particularly in light of the shift to SQA sampling at the higher maturity levels.
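For readers unfamiliar with the phase containment figure used in this anecdote, the sketch below shows the conventional calculation (defects caught in the phase where they were injected, divided by all defects injected in that phase); the numbers are invented and are not the participant's data.

    # Conventional phase containment effectiveness (PCE) calculation; numbers invented.
    def phase_containment_effectiveness(found_in_phase, escaped_downstream):
        injected = found_in_phase + escaped_downstream
        return found_in_phase / injected if injected else 0.0

    print(phase_containment_effectiveness(24, 6))    # 24 caught, 6 escaped -> PCE = 0.8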

3.9 Quantitative Process Management Versus Statistical Thinking

Conceptually, maturity levels 4 and 5 are based on statistical process control [Florac97, Wheeler98], although this was initially stated in terms of operational definitions and comparability in the presence of variation [Humphrey88]. Level 4 focuses on control – identifying and removing special causes of variation in the process, the extraordinary events that prevent the process from performing as intended. Level 5 focuses on improvement – addressing the common causes of variation that are intrinsic to the process. More generally, high maturity organizations appreciate the fundamentals of "statistical thinking" – all work is a series of interconnected processes, all processes are variable, decisions should be based on facts, and a reduction in variation provides improvement opportunities.

The level 4 key process areas, however, talk about "quantitative management" rather than "statistical control." The CMM distinguishes between thresholds (desired or expected performance) at level 3 and control limits (what the process can do) at level 4, but the terminology used in the level 4 practices is "acceptable limits." In CMM v1.0, published in 1991, we explicitly stated in Process Measurement and Analysis that "The organization's standard software process is stable and under statistical quality control," but this was softened in version 1.1 to more accurately reflect the state-of-the-practice in software engineering.

Most level 4 and 5 organizations were appraised using a relaxed interpretation of what is meant by "quantitative management" at level 4. Only a few high maturity organizations are consistently performing rigorous statistical analyses, even today. Most of the analyses seem to be centered around detection and prediction of defects, and the primary business questions asked are, "Is the number of defects going down because of process improvement or because defects aren't being found? Are there any early warnings of a risk that should be identified and managed?"

High maturity organizations understand the impact of variation on processes and predictability. Even when not using control limits in a rigorous sense, they use "acceptable limits" to trigger analysis, usually based on 1, 2, or 3 standard deviations from the average. These are really run (trend) charts, rather than control charts. Behaviorally, the crucial characteristic of high maturity organizations is that the data are linked back to the process, and the analyses are used to stabilize the process. High maturity organizations systematically use measurement in their management and engineering processes, but it would be generous to characterize them as using statistical process control when initially appraised at level 4+ (although a few were using prediction intervals and confidence intervals). The systematic use of measurement does, however, place a software organization among the elite, given the current state-of-the-practice in software engineering, and one of the consequences of continual process improvement is that even level 5 organizations are always looking for opportunities to improve. Seven of the thirteen organizations surveyed are now using control charts, and four are piloting their use.

Normalizing the data may be a concern, since normalization can hide important information contained in the raw data. Is there better insight in the raw numbers of defects found in a module? Or in the defect density found by dividing by the module's size? This seems to be situational; some high maturity organizations have found normalization to be problematic, others consider it to be an integral part of the data analysis. We have made some suggestions on alternative ways of normalizing data (e.g., process step normalization as opposed to product normalization [Langston96]), and for some data sets and analyses, normalization may not be necessary (e.g., control chart data that is plus-or-minus 20% [Wheeler98, p. 172]).

Due to drastic declines in major defects, much of the analysis may be of minor defects. One high maturity organization has reported a high correlation (nearly 0.90) between a large number of minor defects and encountering a major defect downstream.

Some organizations have chosen to emphasize cost of quality [Dion93, Haley96] as a quantitative management technique. Cost of quality is a powerful tool for process improvement, since it separates project costs into categories for appraisal, prevention, internal failure, and external failure. The latter two categories result in rework. For most software organizations, rework is 40-50% of project costs. High maturity organizations can get rework costs under 10%, which results in major cost savings and cycle time reductions. Five of the thirteen organizations surveyed use cost of quality analysis, and two others are piloting it.
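As a hedged illustration of the "acceptable limits" and control chart discussion above, the sketch below computes natural process limits for an individuals (XmR) chart in the spirit of [Wheeler98]; the defect-density values and the choice of chart are assumptions for illustration, not a prescription drawn from the surveyed organizations.

    # Minimal XmR (individuals / moving-range) sketch; the data are invented.
    data = [4.1, 3.8, 5.0, 4.4, 3.9, 6.2, 4.0, 4.6]        # e.g. defects per KSLOC, per build

    mean_x = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mean_mr = sum(moving_ranges) / len(moving_ranges)

    upper_limit = mean_x + 2.66 * mean_mr                  # natural process limits for an
    lower_limit = max(0.0, mean_x - 2.66 * mean_mr)        # individuals chart (2.66 = 3/d2)

    for i, x in enumerate(data):
        status = "ok" if lower_limit <= x <= upper_limit else "investigate (possible special cause)"
        print(f"point {i}: {x:.1f} {status}")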


There is a tension between having a stable process and continual process improvement. High maturity organizations use "quantitative management" of their processes, but they are continually improving those processes, which may invalidate the data collected. Most of the changes are incremental, but compounded small changes can lead to dramatic improvements. High maturity organizations recognize the change-stabilize-control cycle and factor its impacts into their prediction and control mechanisms [Dion93]. New processes are typically deployed to new projects rather than perturbing existing projects [Diaz97], although incremental improvements are deployed to all projects.

Process suboptimization is a concern in high maturity organizations. Focusing on measuring and improving individual subprocesses can lead to inefficient performance of the overall project process. For example, an acquisition system can minimize development cost, which results in dramatically higher overall life cycle costs; the acquisition agency optimizes its cost, but at the expense of the customer in the larger sense.

3.10 Software Quality

The data collected at level 4 can provide great insight into software quality, and the implementations of Quantitative Process Management and Software Quality Management are usually tightly linked. The focus of Quantitative Process Management tends to be at the micro-level, however, as executed by individuals and teams, while the focus of Software Quality Management is more of a macro-level project issue. From both perspectives, the in-process use of measurement to drive decisions is intrinsic to the high maturity process.

Among the software quality methods that one might expect to see in high maturity organizations are Quality Function Deployment (QFD) [Hauser88, Zultner95], reliability models [Musa90], and software certification [Dymond96, Grady92]. Only three of the thirteen organizations surveyed were using QFD, only one (with two piloting) was using reliability models, and only six were doing software certification.

3.11 Capturing Product and Application Domain Knowledge

High maturity organizations systematically capture product knowledge as well as process knowledge. One research effort [Besselman95] noted that firms organized around product lines are more likely to possess higher levels of maturity. The emphasis on product lines included significant reuse of software architectures and components. Our observations of high maturity practices substantiated this research to the point of proposing a new key process area that would capture the reuse and product line practices at a high level of abstraction. A draft of Organization Software Asset Commonality may be seen in Software CMM version 2 draft C at www.sei.cmu.edu/cmm/draft-c/c.html.

Seven of the thirteen organizations surveyed are doing "systematic" reuse, and eight have product lines. Two organizations indicated that they are piloting product lines, but not systematic reuse, and one organization is piloting systematic reuse, but not product lines. One organization in the survey provided no answer to the product line and systematic reuse questions. The nine other organizations of the thirteen surveyed were doing either, or both, product lines and systematic reuse.

3.12 Incremental and Revolutionary Improvement

In the journey of continual process improvement, once an organization achieves level 5, the improvement emphases turn to automation, technology, and people issues. These, along with enterprise-level improvement activities, are ongoing concerns, but after the low-hanging fruit of the lower maturity levels are plucked, these become the major leverage points. The technology change management process is robust and vital in high maturity organizations, which must deal with:
§ incremental change [Masaaki86]
§ revolutionary change
§ innovation [Daghfous91]
§ technology transition [Kemerer92]

The high maturity organization automates its processes wherever possible, so engineers and managers can focus on the problem-solving aspects of their work. They usually have good automated support for defining their software processes, deploying those processes across the organization, instrumenting processes to collect both process and product data, and performing data analyses [Dymond96, Pfleeger91]. Typical automation areas include:
§ online repository of software engineering processes and management practices
§ time sheet automation, to collect effort data in useful categories
§ database of intergroup and intragroup commitments and their status
§ organization process capability database, to provide process capability baseline data to projects (a hypothetical record layout is sketched below)
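As a purely hypothetical sketch of the last item in the list above, a record in an organization process capability database might look like the following; every field name and value is invented for illustration and is not taken from any surveyed organization.

    # Hypothetical process capability baseline record (all names and values invented).
    from dataclasses import dataclass

    @dataclass
    class CapabilityBaselineEntry:
        process: str          # e.g. "code inspection"
        measure: str          # e.g. "defects found per KSLOC inspected"
        mean: float           # central tendency from historical project data
        lower_limit: float    # natural process limits derived from that data
        upper_limit: float
        sample_size: int      # how many observations the baseline reflects

    baseline = CapabilityBaselineEntry(
        process="code inspection",
        measure="defects found per KSLOC inspected",
        mean=8.2, lower_limit=3.1, upper_limit=13.3, sample_size=57,
    )
    print(baseline)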

4. Conclusion

What does it mean to be level 4 or 5? The high maturity organization is a learning organization [Senge92]. High maturity organizations
§ understand why they are doing what they are doing [Langley95]
§ know "what to do" when problems are encountered (don't overreact to special causes – concentrate on finding common causes) [Wheeler93, Wheeler98]
§ error-proof their processes to allow for human fallibility
§ convert "blame" into "opportunity" (never use fear as a motivator) [Austin96]
§ balance "empowerment" and "ownership" with "control" [Simons95]
§ measure and predict how much further they have to go to achieve their goals

One of the challenges for any organization is dealing with organizational restructuring – mergers, acquisitions, re-organizations, and rapid growth. Each merger or re-organization can dramatically change the culture of the "original" organization. Although process maturity can help an organization going through such changes, it is still a rocky road. Onboard Shuttle, for example, was part of IBM when initially assessed at level 5, then it became part of Loral, then Lockheed Martin, and it has now become part of United Space Alliance – a dizzying journey over the last decade. Process maturity – and executive recognition of that maturity – can help an organization protect the stability and integrity of its processes during the turbulence of organizational change.

In a very real sense, the objective of CMM-based process improvement is to make software development and maintenance a real engineering discipline. As one description of the Onboard Shuttle project puts it [Fishman97]: "The most important things the shuttle group does – carefully planning the software in advance, writing no code until the design is complete, making no changes without supporting blueprints, keeping a completely accurate record of the code – are not expensive. The process isn't even rocket science. It's standard practice in almost every engineering discipline except software engineering."

Acknowledgements. I would like to thank all of the people who have provided me with insight into high maturity practices and/or reviewed this paper, including: Julie Barnard, Carrie Buchman, Kelley Butler, Anita Carleton, Cora Carmody, Mary Beth Chrissis, Pat Cosgriff, Sylvia Courtney, Bill Curtis, Michael Diaz, K. Dinesh, Ken Dymond, Khaled El Emam, Susan Ford, Greg Fulton, John Gibson, Vivek Govilkar, Atul Gupta, Rick Hefner, Andre Heijstek, Johnnie Henderson, Peter Koester, Susan Meade, Corrine Miller, K.S. Murthy, Dan Nash, John Pellegrin, Mary Lynn Penn, Jeff Perdue, Bill Peterson, Tom Peterson, David Putman, Sarala Ravishankar, Alan Sholtes, Marie Silverthorn, Paul Swart, Subbarao Tangirala, R. Venkatakrishnan, Charlie Weber, Donald Wheeler, Dave Whitten, Gary Wigle, Gary Wolf, George Yamamura, Mike Yanega, and Ralph Young.


References

Austin96  Robert D. Austin, Measuring and Managing Performance in Organizations, ISBN 0-932633-36-6, Dorset House Publishing, New York, NY, 1996.

Basili96  Victor R. Basili, "Applying the Goal/Question/Metric paradigm in the experience factory," Chapter 2 in Software Quality Assurance and Measurement: A Worldwide Perspective, Norman Fenton, Robin Whitty, and Yoshinori Iizuka (editors), ISBN 1850321744, International Thomson Publishing, London, UK, April 1996.

Besselman95  Joe Besselman and Stan Rifkin, "Exploiting the Synergism Between Product Line Focus and Software Maturity," Proceedings of the 1995 Acquisition Research Symposium, Washington, D.C., pp. 95-107.

Billings94  C. Billings, J. Clifton, B. Kolkhorst, E. Lee, and W.B. Wingert, "Journey to a Mature Software Process," IBM Systems Journal, Vol. 33, No. 1, 1994, pp. 46-61.

Briand97  L. Briand, K. El Emam, B. Freimut, and O. Laitenberger, "Quantitative Evaluation of Capture Recapture Models to Control Software Inspections," Proceedings of the Eighth International Symposium on Software Reliability Engineering, 1997, pp. 234-244.

Butler95  Kelley L. Butler, "The Economic Benefits of Software Process Improvement," Crosstalk: The Journal of Defense Software Engineering, Vol. 8, No. 7, July 1995, pp. 14-17.

Butler97  Kelley Butler, "Process Lessons Learned While Reaching Level 4," Crosstalk: The Journal of Defense Software Engineering, Vol. 10, No. 5, May 1997, pp. 4-8.

Clark97  Bradford K. Clark, "The Effects of Software Process Maturity on Software Development Effort," PhD Dissertation, Computer Science Department, University of Southern California, August 1997.

Constantine95  Larry L. Constantine, Constantine on Peopleware, Yourdon Press Computing Series, Englewood Cliffs, NJ, 1995.

Curtis95  Bill Curtis, William E. Hefley, and Sally Miller, "People Capability Maturity Model," Software Engineering Institute, Carnegie Mellon University, CMU/SEI-95-MM-02, September 1995.

Daghfous91  Abdelkader Daghfous and George R. White, "Information and Innovation: A Comprehensive Representation," Technical Report 91-4, Department of Industrial Engineering, University of Pittsburgh, 1991.

DeMarco87  Tom DeMarco and Timothy Lister, Peopleware, Dorset House, New York, NY, 1987.

Deming86  W. Edwards Deming, Out of the Crisis, MIT Center for Advanced Engineering Study, Cambridge, MA, 1986.

Diaz97  Michael Diaz and Joseph Sligo, "How Software Process Improvement Helped Motorola," IEEE Software, Vol. 14, No. 5, September/October 1997, pp. 75-81.

Dion93  Raymond Dion, "Process Improvement and the Corporate Balance Sheet," IEEE Software, Vol. 10, No. 4, July 1993, pp. 28-35.

Dymond96  Ken Dymond, "The Level 4 Software Process from the Assessor's Viewpoint," Conference of the International Software Consulting Network, Brighton, UK, December 1996.

Ebenau93  Robert G. Ebenau and Susan H. Strauss, Software Inspection Process, McGraw-Hill, New York, NY, 1993.

Fagan86  M.E. Fagan, "Advances in Software Inspections," IEEE Transactions on Software Engineering, Vol. 12, No. 7, July 1986, pp. 744-751. Reprinted in Software Engineering Project Management, R.H. Thayer (ed), IEEE Computer Society Press, IEEE Catalog No. EH0263-4, 1988, pp. 416-423.

Fishman97  Charles Fishman, "They Write the Right Stuff," Fast Company, December/January 1997, pp. 2-7.

Florac97  William A. Florac, Robert E. Park, and Anita D. Carleton, "Practical Software Measurement: Measuring for Process Management and Improvement," Software Engineering Institute, Carnegie Mellon University, CMU/SEI-96-HB-003, April 1997.

Fowler97  Kimsey M. Fowler, Jr., "SEI CMM Level 5: A Practitioner's Perspective," Crosstalk: The Journal of Defense Software Engineering, Vol. 10, No. 9, September 1997.

Freedman90  Daniel Freedman and Gerald M. Weinberg, Handbook of Walkthroughs, Inspections, and Technical Reviews, Third Edition, Dorset House, New York, NY, 1990.

Gilb93  Tom Gilb, Dorothy Graham, and Susannah Finzi, Software Inspection, Addison-Wesley, Reading, MA, 1993.

Goldratt97  Eliyahu M. Goldratt, Critical Chain, North River Press, Great Barrington, MA, 1997.

Grady92  Robert B. Grady, Practical Software Metrics For Project Management and Process Improvement, ISBN 0137203845, Prentice Hall, Englewood Cliffs, NJ, May 1992.

Haley95  T. Haley, B. Ireland, E. Wojtaszek, D. Nash, and R. Dion, "Raytheon Electronic Systems Experience in Software Process Improvement," Software Engineering Institute, Carnegie Mellon University, CMU/SEI-95-TR-017, November 1995.

Haley96  Thomas J. Haley, "Raytheon's Experience in Software Process Improvement," IEEE Software, Vol. 13, No. 6, November 1996, pp. 33-41.

Hare95  Lynne B. Hare, Roger W. Hoerl, John D. Hromi, and Ronald D. Snee, "The Role of Statistical Thinking in Management," ASQC Quality Progress, Vol. 28, No. 2, February 1995, pp. 53-60.

Hauser88  J.R. Hauser and D. Clausing, "The House of Quality," Harvard Business Review, May-June 1988, pp. 63-73. Reprinted in IEEE Engineering Management Review, Vol. 24, No. 1, Spring 1996, pp. 24-32.

Hayes97  Will Hayes and James W. Over, "The Personal Software Process (PSP): An Empirical Study of the Impact of PSP on Individual Engineers," Software Engineering Institute, Carnegie Mellon University, CMU/SEI-97-TR-001, December 1997.

Herbsleb97  James Herbsleb, David Zubrow, Dennis Goldenson, Will Hayes, and Mark Paulk, "Software Quality and the Capability Maturity Model," Communications of the ACM, Vol. 40, No. 6, June 1997, pp. 30-40.

Humphrey88  Watts S. Humphrey, "Characterizing the Software Process," IEEE Software, Vol. 5, No. 2, March 1988, pp. 73-79.

Humphrey89  Watts S. Humphrey, Managing the Software Process, ISBN 0-201-18095-2, Addison-Wesley, Reading, MA, 1989.

Humphrey91  Watts S. Humphrey, Terry R. Snyder, and Ronald R. Willis, "Software Process Improvement at Hughes Aircraft," IEEE Software, Vol. 8, No. 4, July 1991, pp. 11-23.

Humphrey95  Watts S. Humphrey, A Discipline for Software Engineering, ISBN 0-201-54610-8, Addison-Wesley Publishing Company, Reading, MA, 1995.

Kemerer92  Chris F. Kemerer, "How the Learning Curve Affects CASE Tool Adoption," IEEE Software, May 1992, pp. 23-28.

Knight93  John C. Knight and E. Ann Myers, "An Improved Inspection Technique," Communications of the ACM, Vol. 36, No. 11, November 1993, pp. 51-61.

Krasner94  Herb Krasner, Jerry Pyles, and Harvey Wohlwend, "A Case History of the Space Shuttle Onboard Systems Project," SEMATECH, Technology Transfer 94092551A-TR, 31 October 1994.

Langley95  Ann Langley, "Between 'Paralysis by Analysis' and 'Extinction by Instinct'," Sloan Management Review, Vol. 36, No. 3, Spring 1995, pp. 63-76.

Langston96  Dale Langston, "Framework for Statistical Process Control in a Development Environment," Level 4 Workshop, Software Productivity Consortium, 19 March 1996.

Masaaki86  Imai Masaaki, Kaizen: The Key to Japan's Competitive Success, McGraw-Hill, New York, NY, 1986.

Mashayekhi93  Vahid Mashayekhi, Janet M. Drake, Wei-Tek Tsai, and John Riedl, "Distributed, Collaborative Software Inspection," IEEE Software, Vol. 10, No. 5, September 1993, pp. 66-75.

MIEL95  "Software Development: The MIEL Experience," 1995 SEPG Conference, Boston, MA, 22-25 May 1995.

Miller98  Corinne Miller, "Sustaining a Continuous Improvement Culture: From Start-Up Venture to Big Business in a Decentralized Culture," Proceedings of the 1998 Software Engineering Process Group (SEPG) Conference, Chicago, IL, 9-12 March 1998.

Musa90  J.D. Musa and W.E. Everett, "Software-Reliability Engineering: Technology for the 1990s," IEEE Software, Vol. 7, No. 6, November 1990, pp. 36-43.

Paulk95  Carnegie Mellon University, Software Engineering Institute (Principal Contributors and Editors: Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis), The Capability Maturity Model: Guidelines for Improving the Software Process, ISBN 0-201-54664-7, Addison-Wesley Publishing Company, Reading, MA, 1995.

Paulk96  Mark C. Paulk, "Effective CMM-Based Process Improvement," Proceedings of the 6th International Conference on Software Quality, Ottawa, Canada, 28-31 October 1996, pp. 226-237.

Pfleeger91  S.L. Pfleeger, "Process Maturity as a Framework for CASE Tool Selection," Information and Software Technology, November 1991.

Senge92  Peter Senge, "Building Learning Organizations," The Journal for Quality and Participation, March 1992. Reprinted in IEEE Engineering Management Review, Vol. 24, No. 1, Spring 1996, pp. 96-104.

Simons95  Robert Simons, "Control in an Age of Empowerment," Harvard Business Review, March-April 1995, pp. 80-88.

Thamhain96  Hans J. Thamhain, Engineering Management: Managing Effectively in Technology-Based Organizations, John Wiley & Sons, New York, NY, 1996.

Weinberg94  Gerald M. Weinberg, Quality Software Management Volume 3: Congruent Action, Dorset House Publishing, New York, NY, 1994.

Wheeler93  Donald J. Wheeler, Understanding Variation: The Key to Managing Chaos, SPC Press, Knoxville, TN, 1993.

Wheeler98  Donald J. Wheeler and Sheila R. Poling, Building Continual Improvement: A Guide for Business, SPC Press, Knoxville, TN, 1998.

Wigle97  Gary B. Wigle and George Yamamura, "Practices of an SEI CMM Level 5 SEPG," Crosstalk: The Journal of Defense Software Engineering, Vol. 10, No. 11, November 1997, pp. 19-22.

Willis98  R.R. Willis, R.M. Rova, et al., "Hughes Aircraft's Widespread Deployment of a Continuously Improving Software Process," Software Engineering Institute, Carnegie Mellon University, CMU/SEI-98-TR-006, May 1998.

Yamamura97  George Yamamura and Gary B. Wigle, "SEI CMM Level 5: For the Right Reasons," Crosstalk: The Journal of Defense Software Engineering, Vol. 10, No. 8, August 1997, pp. 3-6.

Zultner95  Richard E. Zultner, "Blitz QFD: Better, Faster, and Cheaper Forms of QFD," American Programmer, Vol. 8, No. 10, October 1995, pp. 24-36.

Zultner98  Richard E. Zultner, "Critical Chain – Doing Development Faster With Quality," Joint 1998 Proceedings of the Pacific Northwest Software Quality Conference and the Eighth International Conference on Software Quality, Portland, Oregon, 13-14 October 1998, pp. 26-37.

Appendix A. Conclusions from the 1996 Level 4 and 5 Workshop

Six organizations participated in the Level 4 and 5 workshop:
• Citicorp Information Technology Industries, Ltd (CITIL), Bombay, India. Atul Gupta and Vivek Govilkar.
• Lockheed Martin Space Information Systems (now United Space Alliance) Onboard Shuttle (OBS), Houston, TX. Peter Koester and Tom Peterson.
• Motorola GSTG Government Electronics Division, Scottsdale, AZ. Michael Diaz.
• Motorola India Electronics Pvt. Ltd (MIEL), Bangalore, India. Sarala Ravishankar.
• Raytheon Electronic Systems, Sudbury, MA. Gary Wolf and Sylvia Courtney.
• TRW, Redondo Beach, CA. Rick Hefner.

At the end of the 1996 Level 4 and 5 workshop, a number of general observations were made. The six high maturity organizations then voted on which ones were characteristic of their practices. That summary of the workshop is listed below; the number in parentheses after each observation is how many of the six organizations reported it as characteristic of their practices. The wording of the observations has been slightly rephrased to be clearer to those who did not attend the workshop.

1. Systematic reuse is used (as opposed to opportunistic reuse). (4 of 6)
2. Incremental and/or evolutionary software life cycles are used. (6 of 6)
3. Major new processes are adopted primarily by new projects, rather than perturbing existing projects. (5 of 6)
4. Impact of reorganizations is detrimental to high maturity. (4 of 6)
5. Other things than process are important. (6 of 6)
6. Automated collection of data is used. (4 of 6)
7. The World Wide Web is used for process deployment. (5 of 6)
8. Basic statistics are used, as opposed to "sophisticated" statistics, but decisions are actively based on quantitative analysis. (6 of 6)
9. There is a shift from revolutionary to evolutionary change as high maturity organizations deal with asymptotic trends after "low-hanging fruit" are addressed. (4 of 6)
10. Useful historical data are not strictly quantitative. (6 of 6)
11. There is an emphasis on strategic business planning. (3 of 6)
12. Root cause analysis is incorporated into inspections, because the "data" are not inherently insightful (critical contextual information is lost over time). (5 of 6)
13. Post mortem information is formally used for software process improvement. (6 of 6)
14. Focus on "phase containment" as a good source of data. (5 of 6)
15. Formal mentoring program (more than just assigning a mentor) is used. (3 of 6)
16. Peer pressure is important to promote desired behavior. (3 of 6)
17. Process tailoring to application domains is performed. (5 of 6)
18. Process tailoring to customers' needs and requests is performed. (3 of 6)
19. Educate the customer (with data). (5 of 6)
20. Process owners (teams) are practitioners, possibly different from the SEPG. (3 of 6)


Appendix B. Results of Survey of the Practices of High Maturity Organizations

The purpose of the survey was to confirm (or challenge) my observations about the practices of higher maturity organizations. It was not intended to solicit data on practices that distinguish between higher and lower maturity organizations or to describe how tools and methods change as an organization matures. The tools and methods listed in the survey are frequently recommended to support good engineering and management practices. In some cases, however, it is clear that they are not broadly used, and it may be arguable whether they add true business value (e.g., few software organizations systematically use control charts). By studying the deployment of these tools and methods in higher maturity organizations, I hoped to gain a more systematic understanding of the state-of-the-practice for level 4 and 5 organizations and what practices high maturity organizations are aware of, and perhaps considering piloting.

Thirteen organizations provided responses to the survey:
§ Boeing, Inertial Upper Stage (Mike Yanega and Greg Fulton)
§ Ericsson Telecommunicatie BV, Netherlands, Unix Development Department (Paul Swart)
§ Infosys Technologies (K. Dinesh)
§ Lockheed Martin, Federal Systems Owego (Alan Sholtes)
§ Lockheed Martin, Management and Data Systems (Mary Lynn Penn)
§ Lockheed Martin, Mission Systems (John Gibson and Susan Meade)
§ Motorola, GSM Products Division – Base Station Systems (Corinne Miller)
§ Raytheon Systems Company, Sensors and Electronics (Dan Nash)
§ United Space Alliance, Onboard Space Shuttle (Julie Barnard, Susan Ford, Pete Koester, and Johnnie Henderson)
§ US Air Force, Ogden Air Logistics Center (OO-ALC/TIS) (Pat Cosgriff)
§ US Air Force, Oklahoma City Air Logistics Center (OO-ALC/LAS) (Kelley Butler)
§ Wipro Infotech Group, Enterprise Solutions Division (Subbarao V. Tangirala)
§ Wipro Infotech Group, Technology Solutions Division (K. Sreenivasa Murthy)

The scale used in the survey:
§ Not Used – The tool/method/approach is not typically used in the organization, although individuals may use it on an ad hoc basis.
§ Rejected – The tool/method/approach has been considered, perhaps even piloted, but it has been rejected for common use by the organization, although some special cases of use may occur rarely. If a different tool has been selected, e.g., function points over lines of code, then lines of code should be considered "rejected" (it is also possible that both could be "standardized" if, for example, projects could select which they used).
§ Pilot Use – The tool/method/approach is currently being piloted and may come into common use later, depending on the results of the pilot.


§ Common Use – The tool/method/approach is used by many, perhaps even in most opportunities, but it should not be considered an institutionalized, standard way of doing things across the organization.
§ Standardized – The tool/method/approach is embedded in the organization's set of standard software processes or in the way the organization is structured. The tool is expected to be used whenever an opportunity for its effective use arises – it is institutionalized. Instances where it is not used are comparatively rare, e.g., legacy systems or customer requirements to use other tools.

On some surveys there were questions that had no answer filled in. Although listed separately below, they were grouped with the "not used" responses, and comments frequently indicated that the respondent was unfamiliar with the tool, at least under the name used. A deficiency of the survey is that the brief descriptions provided may have been insufficient to acquaint the respondents with a method, and some organizations may be using a method under another name.

The survey results below are from:
• 1 maturity level 3 organization
• 6 maturity level 4 organizations
• 6 maturity level 5 organizations

It should also be noted that 11 of the 13 organizations have ISO 9001 certification.

Some interesting points observed in the survey and not discussed in the paper:
• All of the organizations used lines of code as a size measure (including maintenance organizations, where size measures provide minimal value). Only two organizations used function points (and one organization commented on the difficulty of using lines of code for measuring graphical languages), and six had considered and actively rejected function points as a size measure. This is a fairly surprising result, given the popularity of function points in commercial software environments.
• Critical chain project management is a little-known but interesting project management approach [Zultner98, Goldratt97] that applies a statistical understanding of estimating. None of the organizations surveyed is using or piloting it.
• It was somewhat surprising that only eight of the organizations surveyed were using earned value [Thamhain96], a management tool that is strongly supported by the U.S. Department of Defense (see www.acq.osd.mil/pm/). Effective earned value depends on a fairly detailed work breakdown structure (down to the "binary inchstone" level), but it would seem to be a management tool that high maturity organizations would naturally gravitate toward. (A small worked example of the earned value arithmetic follows this list.)
• I was somewhat surprised that five organizations were using formal methods, including three in the U.S. and two DOD contractors.
• Capture recapture models are a little-known but interesting method for predicting defects [Briand97]. They are based on statistical methods for estimating wildlife populations. One organization in the survey has standardized their use, and another is piloting them. (A sketch of the underlying idea also follows this list.)
• The comparative lack of use of PSPSM and TSPSM is worthy of further study, particularly in those organizations that have rejected using them.
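For readers less familiar with earned value, the following minimal sketch (in Python, using invented task data rather than anything reported by the surveyed organizations) shows the basic arithmetic: completed "binary inchstone" tasks earn their budgeted value, and the schedule and cost performance indices compare that earned value to the plan and to actual cost.

    # Minimal earned-value sketch with hypothetical task data.
    # Each task is (budgeted_cost, fraction_complete, actual_cost_to_date),
    # and all four tasks are assumed to have been scheduled for completion by now.
    tasks = [
        (100, 1.0, 110),   # finished, slightly over budget
        (200, 1.0, 190),   # finished, slightly under budget
        (150, 0.0, 20),    # not yet credited ("binary" completion: 0% until done)
        (50,  1.0, 60),    # finished
    ]

    planned_value = sum(budget for budget, _, _ in tasks)            # BCWS: work scheduled to date
    earned_value  = sum(budget * done for budget, done, _ in tasks)  # BCWP: credit only completed work
    actual_cost   = sum(actual for _, _, actual in tasks)            # ACWP

    spi = earned_value / planned_value   # schedule performance index
    cpi = earned_value / actual_cost     # cost performance index

    print(f"EV={earned_value}, PV={planned_value}, AC={actual_cost}, SPI={spi:.2f}, CPI={cpi:.2f}")

An SPI or CPI well below 1.0 flags schedule or cost trouble early, which is what makes the detailed work breakdown structure worth its cost.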
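To illustrate the capture recapture idea in the simplest possible terms, the sketch below uses a Lincoln-Petersen style estimate with invented defect identifiers; it is not the specific model of [Briand97]. When two inspectors independently review the same work product, the overlap between their findings can be used to estimate the total number of defects present, just as marked and recaptured animals are used to estimate wildlife populations.

    # Lincoln-Petersen style estimate of total defects from two independent inspectors.
    # The defect identifiers below are hypothetical, purely for illustration.
    inspector_a = {"D1", "D2", "D3", "D4", "D5", "D6"}
    inspector_b = {"D4", "D5", "D6", "D7", "D8"}

    found_by_a = len(inspector_a)                    # "marked" defects
    found_by_b = len(inspector_b)                    # "recaptured" sample
    found_by_both = len(inspector_a & inspector_b)   # overlap between the two samples

    # Estimated total defects N ~= (n_a * n_b) / n_both; latent defects = N - found so far.
    estimated_total = (found_by_a * found_by_b) / found_by_both
    found_so_far = len(inspector_a | inspector_b)
    estimated_remaining = estimated_total - found_so_far

    print(f"Estimated total defects: {estimated_total:.1f}; "
          f"found so far: {found_so_far}; estimated remaining: {estimated_remaining:.1f}")

A small overlap relative to each inspector's yield suggests that many defects remain undiscovered in the work product.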

Tool, Method, Approach | Not Used | Rejected | Pilot Use | Common Use | Standardized
1. Cost models [2] | 3 | 1 | 1 | 5 | 3
2. Lines of code | 0 | 0 | 0 | 3 | 10
3. Function points | 3 | 6 | 2 | 0 | 2
4. Critical path method (CPM) | 5 | 0 | 0 | 6 | 2
5. Project evaluation and review technique (PERT) | 6 | 0 | 0 | 5 | 2
6. Critical chain project management | 10 (3 no answer) | 0 | 0 | 0 | 0
7. Earned value | 5 | 0 | 0 | 1 | 7
8. Activity-based costing | 5 | 0 | 1 | 3 | 4
9. Evolutionary life cycle models [3] | 1 (1 no answer) | 0 | 2 | 4 | 5
10. Incremental life cycle models [4] | 0 (2 no answer) | 0 | 0 | 3 | 8
11. "Systematic" risk management [5] | 1 | 0 | 1 | 4 | 7
12. Independent SQA group | 0 | 0 | 0 | 0 | 13
13. SQA function embedded in process [6] | 0 | 0 | 0 | 1 | 12
14. Independent test group | 1 | 1 | 0 | 2 | 9
15. IDEF0 or SADT for process definition | 9 (1 no answer) | 3 | 0 | 0 | 0
16. ETX, ETVX or EITVOX for process definition | 3 (1 no answer) | 1 | 1 | 0 | 7
17. Web-based process deployment [7] | 0 | 0 | 0 | 5 | 8
18. CASE tools | 2 | 2 | 0 | 7 | 2
19. Cost of quality analysis | 6 | 0 | 2 | 4 | 1
20. Formal methods | 5 | 2 | 1 | 0 | 5
21. Cleanroom | 9 | 3 | 1 | 0 | 0
22. Software certification [8] | 5 (1 no answer) | 1 | 0 | 1 | 5
23. Quality function deployment (QFD) | 8 (1 no answer) | 1 | 0 | 3 | 0
24. Systematic reuse [9] | 4 (1 no answer) | 0 | 1 | 4 | 3
25. Product lines | 2 (1 no answer) | 0 | 2 | 3 | 5
26. Process simulations and/or models [10] | 11 | 0 | 2 | 0 | 0
27. Run (trend) charts | 0 | 0 | 3 | 3 | 7
28. Control charts [11] | 2 | 0 | 4 | 2 | 5
29. Orthogonal defect classification (ODC) | 6 | 0 | 0 | 4 | 3
30. Pareto analysis | 1 | 0 | 2 | 4 | 6
31. Prediction intervals [12] | 7 (1 no answer) | 0 | 3 | 1 | 1
32. Confidence intervals | 6 (1 no answer) | 0 | 3 | 1 | 2
33. Design of experiments | 9 | 0 | 3 | 1 | 0
34. Analysis of (co)variance (ANOVA, ANCOVA) | 10 | 0 | 2 | 1 | 0
35. Reliability models | 10 | 0 | 2 | 0 | 1
36. Capture recapture models | 10 (1 no answer) | 0 | 1 | 0 | 1
37. Personal Software ProcessSM (PSPSM) | 8 | 3 | 2 | 0 | 0
38. Team Software ProcessSM (TSPSM) | 11 | 1 | 1 | 0 | 0
39. Integrated product & process development (IPPD) | 4 (1 no answer) | 0 | 1 | 2 | 5
40. Structured brainstorming [13] | 3 | 0 | 4 | 4 | 2

[2] Examples of cost models include COCOMO, COCOMO II, Price-S, SLIM, and SPR.
[3] Examples of evolutionary life cycle models include the spiral model and rapid prototyping.
[4] Long-term maintenance projects, such as Onboard Shuttle, would be an example of an incremental life cycle.
[5] Examples of systematic approaches to risk management include SPMN's "top 10" risks and the SEI's taxonomy-based risk identification.
[6] For example, a role in the peer review method, via a buddy system, or as Software Configuration Management entry criteria for baselining.
[7] Do you make your process assets (standard software processes) available by Web page, intranet, or other electronic deployment means?
[8] In the sense that the software must satisfy strict, quantitative exit criteria for quality before release, e.g., defect trends or predicted mean-time-to-failure. HP's software certification is an example.
[9] "Systematic" as opposed to "opportunistic" reuse is characterized by an overall strategy for reuse within the organization, e.g., domain-specific software architectures.
[10] These are automated models for doing "what if" studies of process performance and impact analyses. System dynamics models, such as those by Forrester and Abdel-Hamid, are examples.
[11] If you are using control charts, please include a comment on what specific ones you are using, e.g., XbarR, XmR, u, Z, cusum, etc.
[12] Any organization using the Personal Software Process (PSP) is using prediction intervals.
[13] Examples of structured brainstorming techniques include the Nominal Group Technique and Delphi methods.
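Footnote 11 asks respondents which control charts they use (XbarR, XmR, u, Z, cusum, etc.). As a minimal sketch of the simplest of these, the Python fragment below computes XmR (individuals and moving range) limits for an invented series of inspection defect densities; the data values are illustrative only, and 2.66 and 3.268 are the standard XmR chart factors.

    # XmR (individuals and moving range) control limits for a small hypothetical data set,
    # e.g., defects per KLOC found in successive inspections. Values are illustrative only.
    data = [4.2, 3.8, 5.1, 4.6, 3.9, 4.4, 5.0, 4.1, 4.7, 4.3]

    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    x_bar = sum(data) / len(data)                      # process mean
    mr_bar = sum(moving_ranges) / len(moving_ranges)   # average moving range

    # Standard XmR constants: 2.66 for the individuals limits, 3.268 for the range chart.
    ucl_x = x_bar + 2.66 * mr_bar   # upper natural process limit
    lcl_x = x_bar - 2.66 * mr_bar   # lower natural process limit
    ucl_mr = 3.268 * mr_bar         # upper limit for the moving range chart

    print(f"X-bar={x_bar:.2f}, UCL={ucl_x:.2f}, LCL={lcl_x:.2f}, "
          f"mR-bar={mr_bar:.2f}, mR UCL={ucl_mr:.2f}")

Points falling outside these natural process limits signal assignable causes worth investigating, which is the sense in which level 4 and 5 organizations actively base decisions on quantitative analysis rather than on simple trend watching.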
