A Simplified Model for Software Inspection

Sanjay Misra¹, Luis Fernandez-Sanz², Ricardo Colomo-Palacios³

¹ Department of Computer Engineering, Atilim University, Ankara, Turkey
² Department of Computer Science, University of Alcala, Alcala de Henares, Spain
³ Computer Science Department, Universidad Carlos III de Madrid, Spain

[email protected], [email protected], [email protected]

Abstract. Software inspection is considered a cost-effective quality assurance technique in software process improvement. Although inspections detect the majority of defects in the early stages of the development process, the technique is not common practice in the software industry, especially in small and medium enterprises (SMEs). In this paper, we propose a model for the inspection process intended to be applicable and acceptable to both SMEs and large software organisations. The model was implemented in two organisations, a medium-scale company and a department of a big company, where its feasibility and benefits were confirmed. A comparison with recent alternative inspection models has also been performed, showing the practicality of the proposal as well as its ease of adoption and cost-effectiveness.

Keywords: software quality, software inspection, software development process, inspection meetings, SME

1 Introduction

The idea of inspection was formally introduced by Fagan in 1976 [1]. IEEE [2] defines inspection as a peer review process led by software developers who are trained in inspection techniques. After Fagan's seminal work on inspection, a number of new inspection techniques have been developed. Amongst the many published references on this issue, we highlight some relevant contributions in this area [3-12]. In general, most inspection proposals adopt Fagan's technique, although they provide some process alternatives. The changes mainly concern how the steps of Fagan's basic process are adopted and adapted in view of the rapid evolution of the relevant technologies. For example, most of the manual labour-intensive activities in early inspection are now supported by communication tools and collaborative environments; in Fagan's time, computers were mainly used for projects in certain languages (e.g. FORTRAN 77) and were not in general use, so many supporting tools were not readily available. Most tasks were therefore done manually, while at present most of them are computer operated; by reducing traditional documentation, this enables faster interaction and greater capability. Code and design reviews have a clear and high impact on software quality [13] and have proved to be both effective and cost-efficient (see e.g. [14] [15], with improved Return On Investment (ROI) and cost calculations in [16] [17]). However, inspections are not widely used by software companies because they are considered lengthy and time consuming [18]: for an ideal inspection process, industry experience shows that, on average, it takes one inspector-hour to find one defect [19], making inspection an expensive task. As an example of the twofold nature of inspections regarding cost and results, Wilkerson [20] compared in a recent study the defect-reduction benefits of code inspection and Test Driven Development (TDD) and concluded that inspection was more effective than TDD, while also discovering that inspection was a more expensive process. An ideal inspection process and its implementation are therefore cost-prohibitive for many SMEs. In a recent survey on SME software quality assurance activity, the authors of this paper observed that the majority of organisations do not adopt software inspection/review or any variant of the inspection process in their project practices [21].
The study addressed SMEs ranging from micro companies of no more than 10 people to medium-sized companies with 40-50 employees. Although these companies produce different types of software products and applications, we observed that they rarely focus on specific quality assurance activities [21]. The majority of studies [21-24] suggest that SMEs are reluctant to adopt inspections (something that also happens, on a smaller scale, in large companies) because this requires a non-trivial initial investment in resources: the processes are time consuming and expensive, although the results pay off. SMEs represent the majority of industry scenarios when compared to big/large companies, so it is important to investigate whether the above factors (cost and time) are the real cause for not adopting software inspection (SI). We can summarize the findings of our investigation into why a large number of small software companies do not adopt the inspection process in the following points:

• The owners/managers of companies are not especially open to the perception of the benefits of inspection, so they consider it a low-tech method [25]. They frequently ignore the difference between a rigorous testing process and inspections, and so tend to question why inspections should be promoted at all, which seems to them an unnecessary and useless expenditure of money and resources.

• A major limitation, especially severe in small-scale industry, is the usual scarcity and lack of availability of experts in specialised fields who can participate in an inspection process [26]. Additionally, most of the experienced people in these organisations are overloaded with other assignments.

• It is common to have a low proportion of employees with a profile capable of coping with multiple role assignments [27]. Shortage of resources is especially intense when allocating dedicated staff and knowledge to launch quality initiatives [28].

• Most of the staff work under heavy pressure to complete projects. SMEs often have few expert developers, who face severe difficulties in participating in inspection processes. Chroust and Lexen [26] report that approximately 50% of Austrian software companies have fewer than 5 employees. Furthermore, developers and team members do not instinctively know how to conduct and contribute to SI, a major factor for rejecting this practice [28].

In summary, all these observations motivated us to develop a practical inspection process acceptable to the software industry, especially to SMEs. Additionally, we have also tried to analyse the role of meetings in inspection processes. It is worth mentioning that review and inspection are synonymous terms, frequently used interchangeably in the literature [14]. Authors [14] [29] have noted that no clear difference exists between the terms review and inspection in the literature: several authors of inspection-related articles use the term 'inspection', while others use the more general term 'review', even though they actually discuss similar kinds of processes [30]. In fact, inspection is a review process applicable in all software development phases for a large variety of products [9]. In this article, we review the issues related to the inspection process, especially the relevance of meetings, in section 2. Based on its outcomes, we develop a practical model for inspections, presented in section 3. The validation of the model is given in the two subsections of section 4: two case studies are described in section 4.1 together with the observations and experiences extracted from them, and the comparison with other recently proposed models is discussed in section 4.2. Finally, the main conclusions are presented in section 5.

2. Need for a Practical Inspection Process: Evaluation of Software Inspection

Fagan [1] proposed an inspection process with six basic steps: planning, overview, individual preparation, inspection (group review meeting), rework and follow-up. Amongst these steps, the inspection (group review meeting) is probably the most important one, and it has attracted extended debate in the literature [31]. In this section, we first discuss the motivations and guiding ideas which have inspired our proposal. We then discuss the role and relevance of meetings in software inspection.

Scarce implementation of inspection processes: we performed an exhaustive literature survey, supported by existing surveys and systematic literature reviews (SLRs) [41], [42], especially the most recent ones [28], [30]. Macchi and Solari [28] concluded that the low adoption of the inspection process is induced by several factors, including developers' perceptions about the process, lack of training, and characteristics of the process such as complexity. In their study, Kollanus and Koskinen [30] suggested performing more empirical studies to better understand the inspection process in different types of organisations and environments. We have identified and selected two major issues in the SI-related literature on which we have worked. The first is the relatively scarce presence of inspection in software organisations (especially in SMEs) [21-24] despite the wide consensus on the benefits of inspections. The analysis of the underlying reasons shows that the cost and time to be invested in the inspection process are the two major factors hindering the adoption of inspection by SMEs [21]. These factors are complemented by the process itself and the absence of training as the most-mentioned factors impacting the general adoption of inspections [28]. This motivates our aim of presenting a practical inspection process which is simpler to implement, less time consuming, and applicable to all types of projects and organisations (both SMEs and larger companies). This can be done by adopting a realistic approach to the process instead of forcing the adoption of an 'ideal' inspection process model, as requested in [28]. The second issue to be addressed by this new inspection process is the relevance of meetings during the process, which we discuss in detail in the next paragraphs.

Fagan's perception of meetings in the inspection process and evaluation of the inspection process: The idea of the inspection meeting was based on the synergy effects detected by Fagan. He believed that the number of errors found by individuals cannot reach the results of a joint inspection. However, in our opinion there was one more important reason for establishing a compulsory group review meeting: web applications and computer tools, which can fully substitute for physical meetings, were not available in Fagan's time. The group review meeting therefore appeared to be the only available solution for an effective inspection process. As supporting facilities and tools became more advanced and available, the inspection process became more and more computerised. For example, the computerised checklist was introduced into the inspection process in the early nineties: reviewers were encouraged to inspect the documents in an individual preparation phase, while meetings were confined to discussing the faults found by the reviewers [32]. Later, more complete support from computerised tools became available, including not only checklist functionalities but also annotation support, automatic defect detection, distributed meeting support, decision support and metric collection [3, 6, 8, 9]; a good comparison of these tools is available in [33]. Current tools and supporting environments can also record data on the effort devoted by reviewers during the process, and support geographically separated development teams [3, 6].

Discussion on formal meetings in software inspection: Apart from the advances in support for the process, researchers are still discussing the usefulness of meetings in inspection processes, and these discussions are not at all conclusive. Much debate for and against meetings in inspection has been held, starting with examples like [34] [35] in the nineties. One group of scientists and practitioners fully supports meetings in the inspection process [19] [36]; another group emphasises that meetings do not really add much value to the quality of the product and should be avoided [31]. However, the value of meetings depends on different factors such as the experience available at the different levels of management, the availability and functionality of supporting tools, the limitations and circumstances of the project, the risks, etc. [37]. On the one hand, the number of available software professionals in small-scale industry is scarce (sometimes 5-10), so the inspection process may end up being shared by few people and the steps of the ideal inspection process cannot be implemented very strictly. On the other hand,

large-scale organisations running large projects would require a rigorous inspection process. However, the shape of the meetings may change depending on the available technology [37]. With the help of modern tools, developers and reviewers can interact even if they do not physically meet. Even if meeting-support features are not available in the supporting environment, it is very easy to create dedicated blogs and chat rooms for this purpose. It is also possible, for example, for a developer to fix all the defects marked by reviewers, or to discuss specific problems in a reviewer's comments using chat, voice or video conference. These examples clearly show that we should neither totally ignore meetings nor implement them in a very strict way. This motivates us to explore the practical aspects and usefulness of meetings in inspections, specifically analysing when and where they are required and in what form (i.e. physical meetings or meetings using ICT applications and collaborative tools).

Other related work on software inspections/meetings: The literature has proposed different ways to reduce the cost and duration of inspection meetings and make the process more effective. Pioneering work by Parnas and Weiss [32] already suggested performing defect detection during the individual preparation phase to minimize the duration of meetings. They even suggested that conversation should involve only one reviewer and the developer at a time while the others listen, although critics note that it is not easy to follow the logic of the meeting when the other reviewers merely act as an audience. Later in the history of inspections, computer-based techniques appeared as support for the process. A good comparative study of tools for the inspection process was presented by Macdonald and Miller [38] in 1999; industrial experience in using group support systems was reported in [39]; and Hedberg [10] presented perspectives on a next generation of inspection tools in 2004. Research on tool support is also summarised in a major survey on inspections published in 2009 [30], while a recent work [40] proposed a new scheme. The consideration of these points, together with the discussion in the previous paragraphs, represents the main motivation for our proposal of a simple model for an inspection process, aimed at being flexible, feasible and cost effective. The model emphasises support for SMEs, helping them cope with limited availability of software professionals and resources while handling extreme time pressure and budget shortage. The goal is that an SME should not need significant extra budget to implement an inspection process.

In this section, we have presented a brief description of the evolution of inspection since the seminal works. The inspection literature embraces hundreds of papers as well as reviews [38] and surveys, as mentioned above [28] [30] [41-42]. Our aim has been to extract the most practical conclusions and acceptable solutions to fulfil the real needs of companies.

3. A Practical Model for Software Inspection

As discussed in the previous section, inspections have proved to be one of the most effective techniques for improving the quality, schedule, and cost of developing software. However, more than 30 years after their inception, they have not become an integral part of software development life cycles in actual practice [18]. Different studies [21-24] have shown that only a minority of companies have adopted this practice in their development processes. Large companies often ignore the possibilities of inspection because they feel unable to assess the benefits [25], especially in the short term; SMEs tend to discard it because of their limited resources and budget. In the following sections, we present how an effective, practical and cost-effective inspection process can be carried out for SMEs and large companies alike. This section is divided into four subsections and the whole process is presented in seven steps. Section 3.1 prepares the basis for the model and provides the fundamental three steps which are necessary for both SMEs and large companies (Fig. 1). Sections 3.2 and 3.3 introduce the sub-models/flow charts describing software inspection for SMEs and for large companies (Figures 2 and 3). The last section, 3.4, provides the remaining steps, which are common to both SMEs and large companies, and presents our simplified model (Figure 4), which is in fact the integration of the inspection process described in sections 3.1-3.3.

3.1 Initial Inspection Process

The proposed model starts with the following three steps (also described in Figure 1).

Step 1. Start of the inspection process: self-review of the module under inspection

A code inspection process starts when a developer realises that he/she has completed the code or a part of it. At this stage, we strongly suggest that the developers themselves review the code with the help of the available tools to detect general defects. A number of tools are available for improving the quality of code (e.g., CheckStyle [43] and Jlint [44] for Java). These static analysis tools can also identify defect-prone parts of the code. Such analysis includes the calculation of source-code metrics, detection of potential bugs based on defined patterns, and discovery of violations of coding conventions and rules [45]. With the help of these tools one can easily detect coding bugs and refactor/redesign accordingly; refactoring is a well-accepted practice for improving existing code [46]. The Clang Static Analyzer is a good example of a tool for detecting bugs in C [47]. It is worth mentioning that although static analysis tools can analyse several program properties, most of them are used for bug finding [48]. This step will reduce the subsequent effort of the reviewers. After the self-review, the developer has to inform the team leader that the entry criteria for inspection are met so that the formal inspection process may begin.
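The kind of check performed in this self-review step can be illustrated with a minimal sketch. The following Python fragment is a hypothetical illustration of our own, not one of the cited tools, and its threshold is arbitrary; it flags two simple defect-prone patterns (overly long functions and unresolved TODO/FIXME markers) of the kind that tools such as CheckStyle or the Clang Static Analyzer detect far more thoroughly:

```python
import ast

# Illustrative threshold; real tools such as CheckStyle make limits configurable.
MAX_FUNC_LINES = 40

def self_review(source: str) -> list[str]:
    """Return warnings for two simple defect-prone patterns in Python source."""
    warnings = []
    tree = ast.parse(source)
    # Flag functions whose body spans more than MAX_FUNC_LINES lines.
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > MAX_FUNC_LINES:
                warnings.append(f"{node.name}: {length} lines (> {MAX_FUNC_LINES})")
    # Flag leftover work markers the developer should resolve before inspection.
    for lineno, line in enumerate(source.splitlines(), start=1):
        if "TODO" in line or "FIXME" in line:
            warnings.append(f"line {lineno}: unresolved TODO/FIXME marker")
    return warnings

code = "def f():\n    x = 1  # TODO: handle errors\n    return x\n"
print(self_review(code))
```

An empty result would indicate that the entry criteria of Step 1 are met and the developer can notify the team leader.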

[Figure 1 depicts the three initial steps as boxes: Step 1, self-review of the items to be inspected with the available tools; Step 2, decision on the type and way of the inspection process; Step 3, appointment of reviewers based on (1) the size of the product and (2) the size of the company.]

Figure 1: Proposed initial three-step approach to software inspection

Step 2. Decide the type and method of the inspection process: In the second step, the team leader is responsible for deciding on the type and method of the inspection process. If the code is too large to be feasibly inspected by one reviewer, it must be partitioned into small modules. Partitioning the product for the inspection process was already suggested by Parnas and Lawford [49]. Present-day software is large and complex; e.g. software systems for embedded control may contain hundreds of thousands of source-code lines, several functions and hundreds of classes, and require a good number of software developers. Naturally, after development such code must be partitioned into small modules, which are then inspected by several reviewers. These modules may be groups of classes or functions. On the other hand, if the project is small in terms of size and functionality, there may be no need to partition the product.

Step 3. Appoint the reviewers: The next step is the appointment of reviewer(s). If the product is small, one expert reviewer might suffice; if it is large, a team of reviewers may be required. The selection of reviewer(s) at this stage is an important task for the team leader. The reviewer(s) must be experienced and should ideally be selected from personnel outside the development team. Sometimes, it may not be possible to find suitable external reviewers for numerous reasons. Keeping this problem in mind, we suggest the following steps for the inspection process:
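The partitioning decision in Step 2 can be sketched as a simple greedy grouping of files into review modules under a lines-of-code budget. This is only an illustrative policy of our own, assuming LOC counts are already available from the Step 1 tool analysis; the model does not prescribe any particular partitioning algorithm:

```python
def partition_for_review(files: dict[str, int], budget: int) -> list[list[str]]:
    """Greedily group files into review modules of at most `budget` LOC.

    `files` maps file name -> lines of code. A single file larger than the
    budget becomes its own module (it would need further splitting by hand).
    """
    modules, current, used = [], [], 0
    # Largest files first, so oversized files are isolated early.
    for name, loc in sorted(files.items(), key=lambda kv: -kv[1]):
        if current and used + loc > budget:
            modules.append(current)
            current, used = [], 0
        current.append(name)
        used += loc
    if current:
        modules.append(current)
    return modules

files = {"control.c": 800, "io.c": 300, "util.c": 200, "main.c": 100}
print(partition_for_review(files, budget=1000))
```

Each resulting module could then be assigned to a different reviewer in Step 3, keeping every reviewer's workload bounded.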

Our model recommends different approaches to the selection of reviewers and the inspection process for SMEs (Step 4.1) and for large companies (Step 4.2). As commented earlier, conditions in SMEs (limited resources: fewer employees, few experienced professionals, limited budget, etc.) may suggest a specific philosophy very different from the one a large company can afford. We wish to state that our guidelines for large-scale companies are addressed only to those companies that do not already have an inspection process in place.

3.2 Software inspection process for SME

The percentage of SMEs in every country is far greater than that of large software enterprises. However, as commented above, resource and cost constraints reduce the number of SMEs that implement quality standards [21] [28]. This section introduces a simplified and more practical approach to inspection through the following simple steps (see also Figure 2), which continue the common steps presented earlier, starting with the appointment of reviewers in the SME scenario.

[Figure 2 depicts the SME decision flow: if an external reviewer is available with management consent, appoint an external peer reviewer (4.1.1); otherwise, if experienced staff are available, appoint one as peer reviewer (4.1.2); failing that, conduct mutual inspection, with other developers apart from the original developer of the code carrying out the inspection (4.1.3); failing that, the most experienced team members inspect the small modules and the modules most prone to errors (4.1.4); as a last resort, the developer of the code inspects it him/herself with the available tools (4.1.5). All branches then go to Step 5 in Figure 4.]

Figure 2. Inspection process in SME

Step 4.1. For the SME, the appointment and selection of inspectors and the inspection process heavily depend on the situation and circumstances of the company. Thus we propose the following steps:

Step 4.1.1. Appoint an expert from outside the company.

Step 4.1.2. If the external option cannot be accepted by management, an experienced person from the company has to be selected. He/she may be the leader of a group, an experienced person or a company staff member, but should not be a member of the specific project team. This approach is similar to peer review [4]; in fact, the term peer review is also used for inspections in the CMMI model.

Step 4.1.3. If step 4.1.2 is not applicable due to the unavailability of suitable candidates, or due to extreme work pressure on these people (as they are usually experienced and senior professionals), inspection should be done mutually (changing the roles of development team members, i.e. the developers act as reviewers and inspect the documents produced by others). In this case, they act fully as reviewers and use all the tools and checklists required for an ideal inspection process.

Step 4.1.4. If step 4.1.3 is not applicable due to time pressure or a lack of expertise in the inspection process among the software developers, it is suggested that the most experienced team members take on the inspection of short documents or small modules. Emphasis should be placed on the most defect-prone parts. Defect-prone modules can easily be identified by the team leader, frequently using metrics and indicators taken from tool-based analysis (as stated in Step 1 of our model in Figure 1), following a Pareto philosophy. The team leader is best placed to identify as candidates the most complex modules (in terms of functionality) as well as the modules developed by new, untrained or inexperienced software developers.

Step 4.1.5. Finally, if step 4.1.4 is not possible, the developer of the code him/herself tries to inspect the code with the help of all available tools and checklists taken from the literature and the web [47] [49]. He/she should take particular care to inspect the modules of the code which are the most complex and error-prone.
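The fallback chain of Steps 4.1.1-4.1.5 can be expressed compactly as a decision function. This is an illustrative encoding of our own; the predicate names are hypothetical, and in practice the team leader would weigh these conditions qualitatively rather than as booleans:

```python
def select_sme_inspection(external_ok: bool, experienced_staff: bool,
                          mutual_feasible: bool, senior_member_free: bool) -> str:
    """Return the first applicable option in the SME fallback chain (Steps 4.1.1-4.1.5)."""
    if external_ok:
        return "4.1.1: external expert reviewer"
    if experienced_staff:
        return "4.1.2: experienced in-house peer reviewer outside the project team"
    if mutual_feasible:
        return "4.1.3: mutual inspection (team members swap developer/reviewer roles)"
    if senior_member_free:
        return "4.1.4: most experienced member inspects the defect-prone modules"
    return "4.1.5: developer self-inspection with available tools and checklists"

# Example: no external or in-house expert, but role-swapping is feasible.
print(select_sme_inspection(False, False, True, False))
```

The ordering matters: each option is tried only when every preceding, more rigorous option has been ruled out, which is exactly the degradation path the model proposes for resource-constrained SMEs.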

3.3 Inspection process for large software companies

In most established companies, e.g. Tata Consultancy Services, Telcordia Technologies, Granter Inc, IBM, Raytheon, Motorola, Hewlett-Packard, and Bull HN (most of which have gained CMMI level 5 recognition), teams follow the standard inspection process or adaptations of it. Most of the time they have separate inspection teams which look after all aspects of the inspection process; we have no suggestions for them. However, some large organisations still do not adopt the inspection process [18]. The following steps are suggested for them.

[Figure 3 depicts the flow for large companies: assign the task to an experienced person who further delegates to other experienced persons (4.2.1); use tools principally in the inspection process (4.2.2); create a communication forum where the inspection tool does not provide one (4.2.3); then go to Step 5 in Figure 4.]

Figure 3. Inspection process in large companies

Step 4.2. The following steps for large companies (see Figure 3) were formulated after analysing the practical aspects of inspection implementations in large companies:

Step 4.2.1. The leadership of the full inspection process should be allocated to an experienced person who will be responsible for organising, managing and controlling the inspection activities. This person will appoint the team members, allocate the inspection tasks and coordinate them. An experienced leader will be more productive in the inspection process because he/she benefits from knowing other experienced people with different expertise in the field, whose collaboration can be requested in case of need.

Step 4.2.2. The use of tools in the inspection process should be promoted to the maximum recommendable level [50]. In fact, many automated sub-activities can be accomplished without special skills using software or other convenient sets of tools, including static analysis software, role assignment environments and automated checklists [45].

Step 4.2.3. Communication between the members of the inspection team and the software development team might rely on blogs for discussion (only in cases of low availability of communication devices for reviewers) as well as chat, voice or video conferencing resources, without extra effort. These techniques are alternatives to rigorous inspection meetings and save time, money and resources. Additionally, the time and duration of meetings should be carefully planned to avoid conflicts in agendas and to minimize meeting duration.
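The task allocation in Step 4.2.1 can be sketched as follows. The round-robin policy below is deliberately naive and purely illustrative; in practice the inspection leader would also weigh each reviewer's expertise and current workload:

```python
from itertools import cycle

def allocate_inspections(modules: list[str], reviewers: list[str]) -> dict[str, str]:
    """Round-robin assignment of partitioned modules to reviewers.

    A minimal sketch of the leader's allocation in Step 4.2.1; module and
    reviewer names here are hypothetical.
    """
    # cycle() repeats the reviewer list so every module gets an assignee.
    return dict(zip(modules, cycle(reviewers)))

print(allocate_inspections(["ui", "db", "net"], ["ana", "raj"]))
```

The resulting mapping could then be published in the communication forum of Step 4.2.3 so that each reviewer knows which modules to inspect.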

3.4 Integrated view and final steps of the inspection process

In the previous sections, we presented two different paths in the inspection process, for SMEs and for large companies. In this section, we present the remaining stages/steps of the process and show an integrated view of the process in Figure 4.

Step 5. After the inspection, the summary of the review should immediately be transferred to the authors/developers. If the reviewers and the development team are using special blogs or similar alternative tools, the developer can modify the code or the document in parallel; if she/he detects any problem, she/he can communicate with the reviewers immediately.

[Figure 4 integrates Figures 1-3 into a single flow chart: self-review by developers using available tools (1); decision on the type and way of the inspection process (2); selection of reviewers (3), with large products partitioned so that each part is reviewed by a different expert reviewer and small products handled by a single expert peer reviewer; the SME and large-company branches (4), including mutual inspection, assignment to experienced persons, principal use of tools, and creation of a communication forum where the inspection tool lacks one; feedback to the developer for modification (5); the developer informing reviewers after modification (6); and, if reviewers are not satisfied, a meeting with the developer or group leader and reviewers, with the final decision taken by the group leader or by expert opinion and rework by the developer, before finishing the review process (7).]

Figure 4. Simplified model and flow chart for software inspection

Step 6. After modification of the code or document, the author/developer informs the reviewers; if the reviewers are satisfied, the inspection process may finish.

Step 7. Otherwise, the code is first returned to the author/developer for further modifications. If problems persist, a new meeting with the team leader, the author/developer and the specific reviewers is necessary. Meetings need not be physical, and their duration should be fixed in advance depending on the severity of the comments; they can be organized with the help of videoconferencing systems. After the physical or virtual meeting, if the reviewers are satisfied, the inspection process is completed; otherwise the final decision is taken by the team leader. It is also possible to request the opinion of experts on the disputed issues and change the code accordingly.
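Steps 5-7 form a simple rework loop, which can be sketched as follows. This is an illustrative encoding of our own; the bound on rework rounds before escalating to a meeting is an assumption, not part of the model:

```python
def inspection_cycle(defects_after_rework: list[int], max_rework_rounds: int = 2) -> str:
    """Drive the Step 5-7 loop: rework until reviewers are satisfied,
    escalating to a meeting/leader decision when rounds run out.

    `defects_after_rework` lists the open-defect count reported by reviewers
    after each rework round.
    """
    for round_no, open_defects in enumerate(defects_after_rework, start=1):
        if open_defects == 0:
            # Step 6: reviewers satisfied, the inspection may finish.
            return f"finished after round {round_no}"
        if round_no >= max_rework_rounds:
            # Step 7: persistent problems trigger a meeting and a leader decision.
            return "meeting: leader/expert decision required"
    return "awaiting next rework round"

print(inspection_cycle([3, 0]))
```

A call like `inspection_cycle([3, 2])` instead ends in the escalation branch, mirroring Step 7's meeting with the team leader.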

A comprehensive, combined view of the inspection process for both small and large scale companies is shown in Figure 4. In fact, Figure 4 maps all the steps/guidelines given for SMEs (section 3.2) and large companies (section 3.3), combining Figures 1, 2 and 3, which were explained in the previous paragraphs and sections.

In our model, we assume that a medium-scale company falls either into the small or the large-scale category depending on its circumstances. It should therefore follow one of the two options presented above, depending on its situation and culture.

4. Validation of the Model

The practical value of any new model cannot be accepted without a validation process: the model should be applied in a real environment to check its practical acceptability. In our case, we were able to experiment with the model in two companies. We have also compared our model with recently proposed models for inspection processes.

4.1. Initial validation

The majority of software companies in Europe and Asia are privately owned. One of the authors of this paper spent more than 10 years in consultancy with software companies in Spain (especially in the implementation of software quality programs) and still collaborates as a consultant with some multinational companies, mainly in Spain. Our proposed model for inspections was implemented and evaluated in two companies with strong software projects located near Madrid. Given the restrictions imposed by companies on accepting guided implementations, the opportunity to put our proposal into practice was limited; this obviously influenced the selection of cases and consequently introduces a bias into the case studies. Moreover, only limited data could be published due to confidentiality clauses, and a formal experimental design was not possible because management expressed fears about disturbances to organisational dynamics and performance. Inspired by the recommendations for case study reporting in [51], we have included some of the basic parameters of the two case studies in the following list:

• Rationale and goal: checking the feasibility and results of the proposed model, thus confirming or rejecting these characteristics in at least one SME and one large company.



• Analysis unit and case selection: software organisations; two were selected that agreed to experiment with the proposed method, one for the SME variant and one for the large-organisation variant.



• Research questions: Is the proposed method feasible to implement in each type of scenario? Does it offer good results in terms of defect detection and cost efficiency? Is it accepted by participants and management?



• Data collection: given the restrictions on interaction within the organisations and the preservation of confidentiality, although some direct observation was allowed, data were mainly collected by analysing documentation (reports) provided by the involved managers. The reports included the data and metrics collected during the process through the established procedures in each organisation. Direct structured interviews with the involved managers and QA specialists were also used.



• Analysis procedures: the reports created by the QA managers were agreed with them, and the procedures for processing data were reviewed together.

The descriptions in the next two subsections each comprise three main paragraphs: a description of the context, a description of the implementation with references to the specific guidelines of the method, and a final description of results with some quantitative data. Table 1 provides a short reference guide.

4.1.1. Case study 1: Implementation in a medium-scale company

The context of the first case study was the implementation of inspection in a Spanish medium-sized organisation (around 160 employees, but only 25 devoted to software) which develops embedded software for electronic systems. Projects/subprojects are usually managed by teams of 4-5 developers and have a size of around 15,000 LOC, mainly in C (90% or more) with small parts in assembly language. Due to restrictions in human resources, quality assurance in this organisation was centralised around an expert in charge of general planning and management, helped by an engineer assisting the development teams. Given the scarcity of resources, it was not possible to adopt a traditional 100% code inspection model. However, the QA manager needed a stronger verification and validation scheme to assure project results, so he decided on an affordable, simplified review model (apart from improving testing procedures in parallel). According to the QA manager's reports, the organisation had a poor review culture, which would have made it very hard to select inspectors for a traditional inspection process.

For the implementation, they relied on a version of our simplified model, described above, in which static analysis tools help identify defect-prone modules and check the entry requirements for inspection: a clean compilation report and a successful check against code-style analysis tools and code metrics (e.g. cyclomatic complexity, traditional size metrics, etc.) with values below the established thresholds. These limits were determined after several months of adaptation and pilot studies, starting from the theoretical values recommended for certain software metrics. Developers (step I) perform a self-review and, once the criteria are met, attach the tool and compiler reports as proof. The inspection coordinator (the QA engineer) checks the data using continuously updated and improved checklists and does a short preliminary desk check before deciding, in coordination with the team leader (step II), on the partitioning and selection of modules and deliverables to be formally inspected, adopting a Pareto strategy: formal inspection effort is applied to the percentage of code and deliverables with the highest defect-proneness indicators. As step III, most software parts are small and are inspected only by a QA engineer, similar to desk checking (option B of our model for small organisations). Only specific outstanding defect-prone modules are inspected formally with the participation of several inspectors (mainly the team leader and the QA manager). All communication between reviewers and the author/developer is supported by a collaborative work system to avoid physical meetings. The system communicates results to authors/developers and also collects metrics for evaluating performance and quality statistics. The metric and inspection values are also used to guide intensification of test effort on defect-prone areas, again following a Pareto philosophy (fulfilling the QA manager's second goal of improving software testing efficiency).
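To make the gating idea concrete, the entry-criteria check and Pareto-style selection described above can be sketched as follows. This is an illustrative sketch only: the metric names, threshold values and sample modules are hypothetical, not the company's actual tooling or data.

```python
# Illustrative sketch (hypothetical values): an entry-criteria gate followed
# by Pareto-style selection of modules for formal inspection.

THRESHOLDS = {"cyclomatic": 10, "loc": 400}  # assumed limits, tuned during a pilot period

def passes_entry_criteria(module):
    """A module may enter inspection only after a clean compilation and
    with every collected metric at or below its threshold."""
    if not module["clean_compile"]:
        return False
    return all(module["metrics"][name] <= limit
               for name, limit in THRESHOLDS.items())

def select_for_formal_inspection(modules, ratio=0.2):
    """Pareto strategy: formally inspect only the share of modules with
    the highest defect-proneness indicators."""
    ranked = sorted(modules, key=lambda m: m["defect_indicator"], reverse=True)
    cutoff = max(1, round(len(ranked) * ratio))
    return ranked[:cutoff]

# Invented example modules:
modules = [
    {"name": "uart.c", "clean_compile": True,
     "metrics": {"cyclomatic": 7, "loc": 320}, "defect_indicator": 0.8},
    {"name": "main.c", "clean_compile": True,
     "metrics": {"cyclomatic": 12, "loc": 150}, "defect_indicator": 0.3},
    {"name": "isr.c", "clean_compile": False,
     "metrics": {"cyclomatic": 5, "loc": 90}, "defect_indicator": 0.1},
]

eligible = [m for m in modules if passes_entry_criteria(m)]
formal = [m["name"] for m in select_for_formal_inspection(eligible, ratio=0.5)]
```

In this sketch, `main.c` is rejected by the metric threshold and `isr.c` by the compilation requirement, so only `uart.c` reaches the formal inspection pool; in practice the thresholds and the selection ratio would be calibrated per organisation, as the pilot studies did.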

After a pilot period of 4 months, and given good results in terms of defect detection, the process was fully implemented and also extended to some projects in Ada; it is still running after several years. According to a qualitative analysis of interviews with the QA manager and the QA assistant after the pilot period, developers have understood and accepted the process, whereas at the beginning they were extremely reluctant, mainly fearing an overload of effort. The information from these two people was triangulated with records of an informal survey of developers carried out by them and validated according to the principle of information saturation. Defects reaching testing were reduced by 27% without incurring additional staff costs (a prerequisite imposed by top management). However, a detailed analysis of hidden or non-apparent costs (e.g. effort devoted to developing procedures, customising tools, etc.) was not performed.

4.1.2. Case study 2: Implementation in a large company

The second implementation of a version of our model was carried out in an R&D division of a large multinational telecommunications company in Madrid. Management decided to implement software inspections in all projects (normally involving 4-6 people) as a measure complementary to an improvement program for the partial automation of software testing: it was recognised that the types of defects detected by each technique are complementary and that non-executable products need a more formal control. The organisation began with a traditional review process that minimised meetings (held only when needed), as it was difficult to secure the presence of reviewers. This was disappointing, as the moderator spent a lot of time compiling and analysing all the comments; moreover, tracking the results was hard for the moderator, who could not get any valuable help in this process.

Table 1. Summary of characteristics of the two case studies

| Case studies | Medium-size company | Large company |
| Context | 150 employees, 25 in software. Typical project size: 15-20 KLOC. Embedded software. | Hundreds of employees; up to 150 reviewers involved. All types of projects. |
| Implementation | Simplified model | Model for large companies with some efficiency options |
| Qualitative data | Interviews with QA manager and QA engineer. Comments from developers. | Interviews with manager and QA engineer. Comments from team leaders and reviewers. |
| Quantitative data | Primary: defects, software metrics. Secondary: size. | Primary: interest index, defects, numbers of reviews and reviewers, software metrics. Secondary: size, number of projects and employees involved, devoted time. |
| Main results | Successful implementation; 27% fewer defects reaching testing. | Successful implementation; satisfaction of authors (high acceptance index); short time (25 min) per defect. |
As a result, the implementation started when the manager appointed a QA engineer (advice 1 for large companies) who helped team leaders implement inspection processes combining two people from the project (not the authors/coders) with experts from the QA and testing divisions. Again, a combination of different tools (static analysers, metrics tools, style checkers, etc.) was devised as a first step (advice 2 for large companies) that authors must pass before their software enters inspection: successful inspection is a compulsory step for a development unit to be considered "completed" (step I). A communication system enabled coordination of reviewers and the inspection leader (the QA engineer) as well as recording and communicating findings to authors (advice 4 for large companies). Short physical meetings were reserved for discussing unresolved items remaining after virtual interaction (advice 3 for large companies).

Inspections also covered documents related to requirements specification, design, etc., using specific entry controls such as validation of models by CASE tools and compliance with document format and contents (checked by a QA person). On average, 10-12 inspections per month were performed. The implementation initially created some reluctance among project leaders (who saw it as an additional workload for their teams and projects), but after refining the support for efficiency (restricting meetings to the minimum and reducing work thanks to entry-criteria checking), benefits were soon perceived. Given the company's advanced measurement culture, more detailed data were available for evaluating the process. The QA engineer devised an indicator of the quality of improvement proposals (IPs), which helped to understand the value authors perceived in them: it collects authors' views of the interest of each proposal on an ordinal Likert-style scale (5 points, from "not interesting at all" to "very interesting") and combines the weighted number of IPs of each type. A first set of reviews (78 from 17 projects) was carried out during 6 months without all the aids of the refined process (tools for previous analysis, coordination via communication and information management tools, etc.), although some simplification of the ideal inspection process was deployed. With an ordinal scale it is advisable to work with the distribution over the scale values, but as a summary, the average perceived value was below 2.0 (after some initial period of adjustment).
After the first period of 5.5 months, 148 reviews were performed during almost 18 months using the refined approach, leading to an average perceived value of 3.57 (even reaching 4.4 in the last 30 reviews) and raising the acceptance of IPs by authors from 50% to more than 75%: IPs decreased from an average of 40 per review to around 20, but they were much more useful and efficient for authors, reducing the number of proposals which provide no value to developers. The average time per valuable IP was around 25 minutes (less across all types of IP), lower than the 1-hour average reported in the literature. These numbers helped convince managers and other actors (up to 148 different reviewers intervened) of the benefits of the method after a first period of criticism based on the time expenditure of professionals and the lack of evidence of its perceived value for them. In general, we performed a qualitative analysis of interviews with key participants as well as comments from reviewers and team leaders recorded by the system in a free-text field. We concluded, after reaching information saturation, that thanks to the initial interest and effort of reviewers a new culture has been adopted: project managers now accept reviewing as a normal activity, reviewers participate with high morale, and authors/developers react surprisingly quickly to reviewers' comments.
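The interest indicator described above, a weighted average over the 5-point ordinal scale, can be sketched as follows. This is a hypothetical reconstruction for illustration; the function name and the sample rating counts are invented, not the company's actual data.

```python
# Hypothetical sketch of the IP interest indicator: authors rate each
# improvement proposal (IP) on a 5-point ordinal Likert scale and the
# indicator is the weighted average over the counts per rating.

def interest_index(counts_by_rating):
    """Weighted average of Likert ratings (1 = not interesting at all,
    5 = very interesting) over the number of IPs receiving each rating."""
    total = sum(counts_by_rating.values())
    if total == 0:
        return 0.0
    return sum(rating * n for rating, n in counts_by_rating.items()) / total

# An invented review cycle where most proposals were rated 3 or 4:
counts = {1: 2, 2: 3, 3: 10, 4: 8, 5: 2}
index = interest_index(counts)  # (2 + 6 + 30 + 32 + 10) / 25 = 3.2
```

As the paper notes, a single average is only a summary of an ordinal variable; in practice the full distribution over the scale values should also be examined.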

4.2. Comparative analysis: Comparison with other models

The value and usability of a new model should also be established by comparing it with similar models and checking that it provides better results than them. In this section, we discuss recent works on inspection and compare them with our proposed model. We remark that our model is not a tool for a specific type of inspection but a set of simple guidelines which are practical, easy to adopt, cost effective, and applicable to all types of software organisations.

4.2.1. Case selection

The literature provides a huge number of articles on inspection processes and tools, as Kollanus and Koskinen [30] showed in their survey of the area covering 184 related articles. We have limited our comparison to works proposed in recent years and published in high-impact software engineering journals and conferences. Denger [9] proposed the tailoring approach for Quality-Driven Inspections (TAQtIC), which allows organisations to implement inspections in a sustainable way in a given organisational context. Our proposed model works along the same lines, but TAQtIC suggests implementing a process which includes almost all the steps of ideal inspection depending on the context of the organisation, whereas our model suggests a realistic implementation by software developers when there are no resources and expert professionals, using the available tools and inspecting the artefact by mutually exchanging roles. Sami Kollanus [52] proposed a model named the Inspection Capability Maturity Model (ICMM), claiming that it provides support for the assessment and improvement of inspection practices in a software organisation. The model suggests how to implement and improve inspection practices (on a five-level scale such as CMMI). However, unlike our proposal, the methodology does not explain how inspections can be performed in SMEs with an evident lack of resources and professionals. In another work, Stalhane and Awan [53] suggested three points to improve the inspection process: 1) use large inspection groups in order to have access to a wide diversity of expertise and experience, 2) hands-on experience is more important than general knowledge and experience, and 3) when left to their own criteria, large groups tend to use a voting-like mechanism when deciding what to adopt after the group meeting. The suggested points are effective, but again organisations with limited resources cannot support these large inspection groups.

Table 2. A comparison of the proposed model with others

| Framework | Utilisation of freely available tools | Utilisation of locally available resources (e.g. inspector, members of development team) | Dedicated team for inspection | Cost | Time-taking job | Tool support | Domain specific |
| TAQtIC [9] | Yes | Partially | No | Costly | Yes | Yes | Generic |
| ICMM [52] | Yes | Partially | No | Costly | Yes | Yes | Generic |
| Hashemitaba [54] | Yes | Not clearly defined | No | Costly | Yes | Yes | Generic |
| Model-based specifications inspection [56] | Yes | Partially | No | Costly | Yes | Yes | Inspection for users' requirements |
| Web usability inspection technique [57] | Yes | Not clearly defined | No | Costly | Yes | Yes | Inspection for web design |
| Inspection technique for UML diagrams [58] | Yes | No | Yes | Costly | Yes | Yes | Inspection for UML diagrams |
| ADAMS + WAIT [60] | Yes | Yes | Yes | Costly | Yes | Yes | Web-based artefact inspection |
| The proposed framework | Mixed | Yes | No | Moderate | No | Mixed | Generic |

Hashemitaba [54] proposed a system exploiting the capabilities of collaborative and knowledge environments. The system improves continuously by creating swap iterations in the inspection model kernel; it is based on creating and modifying rules related to defects and adding intelligence and learning features. This system for detecting and removing defects was based on knowledge engineering principles. Although the system [54] was tried in a real software project, it is not easily acceptable to SMEs due to the difficulty of having professionals knowledgeable in managing learning rules: our model offers more simplicity and flexibility. The same happens with Ohgame and Hazeyama [55], who designed an inspection support system for UML diagrams that requires a knowledge-base system in which rules related to defects are added and modified, as well as adding intelligence and learning features to the model. Shaoying et al. [56] proposed a rigorous method for the inspection of model-based formal specifications, promoting well-defined consistent properties of the model as well as precise rules and guidelines for inspection. Well-defined expressions derived from the specification and human inspectors' judgment were used to find errors during the inspection process. Shaoying's method is specifically designed for inspecting the specification of users' requirements, whereas our model is more generic and can be used for requirements, design and code inspection. The same lack of generality applies to the web usability inspection technique based on design perspective by Conte et al. [57] and the approach of Ferreira et al. [58] for the inspection of UML diagrams. De Lucia et al. [59] evaluated the effect of inspection in distributed environments using two scenarios. In the first experiment, the authors applied the distributed inspection method to a disciplined but not flexible method (i.e. Fagan's inspection process); the same experiment was then repeated in a distributed environment with a flexible but not disciplined method. The comparison of the two experiments shows that more flexible methods require less time to inspect a software artefact. This result is consistent with our approach, but De Lucia's work does not consider the restrictions of SMEs. In a related work, the same authors proposed the combination of an advanced artefact management system (ADAMS) with a web-based artefact inspection tool (WAIT) which supports a distributed inspection process [60]. Their empirical investigation shows that the integrated system provides effective support for managing the quality of software artefacts. Since our model already suggests using available tools for the inspection process, the integrated tool (WAIT + ADAMS) may be valuable for distributed processes. There are several other related works (Kelly et al. [61], Shin et al. [62], McNab et al. [63], and De Lucia et al. [60]) in which authors have tried to improve the inspection process with different techniques.
Several researchers [3, 7] proposed tools to manage the inspection process in distributed environments. Perry et al. [7] developed a tool named Hypercode, which is capable of reducing inspection time in distributed environments. Shin et al. [62] investigated metrics (complexity, code churn, and developer activity) which allow the detection of vulnerabilities. Kelly et al. [61] suggested an approach to software assessment that combines inspection with code execution. McNab et al. [63] observed that the skill level of the involved workforce impacts cost and time consumption: they suggest an approach that supports producing the required documentation, which is not easy for SMEs to adopt.

4.2.2. Data collection and analysis procedure

Table 2 presents a comparison between our proposal and the above-mentioned models, showing the benefits according to several criteria: use of a dedicated team for inspection, cost of the inspection process, use of freely available tools instead of specific expensive ones, use of local human resources for the process, etc. These criteria have been selected according to the most important and common features required for any inspection process. No specific references on evaluation criteria for inspections are available in the literature except those created for the evaluation of supporting tools (e.g. MacDonald & Miller [38]), which are not applicable to our proposal, as it is not a tool but a set of selected criteria to be used as guidelines.

4.2.3. Results

We implemented our model in two software organisations as case studies. Previously, these companies had ad-hoc approaches to inspection/review: in the first one, no review process was implemented (inspection was applied only in case of problems), and in the second one a first period of work had produced no convincing results. The organisations adopted our suggestions for inspection, and both experienced the benefits of launching their alternatives to expensive traditional inspection processes. Although the model is applicable to all types of industry, it is most useful for SMEs and small software organisations where there is a shortage of qualified professionals and the company is not able to perform the ideal process: our proposal is mainly deployed at the developer/team level without involving experts from outside the company. This was the case in our first implementation, where a manager together with a senior software professional from the development team performed the review. Before starting the review, all developers were instructed to check their items using analysis tools. Then the two-member team carried out the review (concentrating more on error-prone modules, also for testing). Our model also suggests guidelines for inspection in relatively large software companies. The second case study revealed that the model's suggestions were easy and effective, enabling a workload affordable by the teams, whereas the first implementation of non-refined inspections had caused severe disturbance and rejection. The comparative analysis also shows that our model is relatively more flexible, cost effective, tool-supported and less time consuming than the other models. The model additionally suggests performing inspection using available resources in case of a critical lack of an expert inspection team. The advantages of the proposed model which differentiate it from others are summarized as follows:

1. The proposed model considers all the practical problems of SMEs and recommends easy, feasible steps that non-large organisations can adopt.

2. The proposed model can be applied for inspection in all phases of software development (not only code) as well as for inspecting other types of software development, e.g. in web engineering.

3. Our model may be an ideal inspection technique in an agile development process. Agile development is becoming more common in practice, and Bahsin [64] stated that traditional inspection methods are heavyweight and do not suit agile development. Our model suggests that developers inspect their software themselves (on a regular/daily basis) with the available tools.

4. The proposed model is cost effective, as shown in the results of the two experiences, and significantly reduces the expenditure associated with inspection.

4.2.4. Threats to validity

We could not compare our observations with parallel implementations of traditional inspection and the new model because the companies did not accept such a formal experimental approach: actual comparative data were restricted to a comparison of the first and second periods of implementation. Although a traditional analysis of threats to validity is difficult to perform for the two case studies, we include here some reflections. Regarding general threats to validity, the first one is the selection bias in the case studies. Due to limited opportunities to intervene in organisational processes, the two cases were the best acceptable options we had.
However, a mitigating factor is that we worked with an SME using the simplified model of inspection for small organisations and with a large company using the more complex process, thus checking both options at least once. No bias was detected within either participating organisation, as all data collected during the regular performance of the experiences were considered in the analysis. Regarding the validity of the qualitative research [65] [66], according to the opinions of the individuals interacting with us, the results reflect the perceptions of the people involved in the experiences. A simple triangulation with limited cross-checking between collected opinions and observation tends to confirm the statements. Following some recommendations on validity criteria for qualitative information [65], we can also report the clarity of the research questions, the triangulation of data, reference to related research and reflections on lessons and applicability. The analysis of interviews and comments was done incrementally until reaching information saturation [67]. In the case of quantitative data (some of it secondary data), the collection was reviewed together with the quality managers, analysing the recording systems and procedures. Generalisation of the results is not guaranteed, but the implemented processes, context and working conditions are quite typical of other organisations, so similar implementations are possible for replication. However, results may be affected by many factors, including the ability of managers to organise the processes and resources as well as the existing culture of quality, organisation, measurement and process maturity. Of course, given that we work with guidelines, every implementation should begin with a preliminary process of adaptation to the intended setting (as was done during the case studies).

4.3. Summary and Discussion

In summary, the proposed model suggests easy and feasible guidelines/steps for performing inspection, and these guidelines depend on the situation and circumstances of the project and company. It is not a specific inspection tool that can be applied to target inspection items so that the obtained results can be compared with those of other available tools applied to the same items. In fact, a tool could be designed and developed for our model by combining several alternative solutions, where these alternatives also represent different tools for different purposes. However, replacing all our alternative solutions with tools and combining them into a new tool for full software inspection would again increase cost, which is not suitable or acceptable for SMEs. Furthermore, since a number of tools are already available on the market but the majority of them are not used by SMEs due to cost, there is no logic in developing new and costly integrated software for the inspection process. Doing so would not solve the real problems of SMEs.

5. Conclusions and future work

A critical drawback of inspection is still its weak implementation in the software industry. As shown in our analysis of the literature as well as in the conclusions from direct experience, there are several reasons why the ideal inspection process is not adopted, but the two main ones are the cost and the time spent in the inspection process, which is a real problem for SMEs. This article recommends a model with a simplified inspection approach aimed at being a solution for organisations with scarce resources. It was applied in two software organisations (one medium-sized and one big company). We collected data and information confirming that benefits such as cost effectiveness, efficiency and feasibility were achieved. The most important feature of this simplified model is the flexibility to select the most suitable process. It suggests that in all cases the software should be inspected, even in the absence of qualified inspectors, review teams, etc. In such cases, the developer of the product is advised to inspect it with the help of other software developer(s). In the worst case, if other software developers are not available due to time constraints and/or workload, the product must be inspected by the developer him/herself with the help of the available tools. Good results in practical application to different settings suggest that the proposed model could be a valuable contribution to extending inspection adoption. However, the method also shows some limitations and disadvantages. The approach is a generic model which may not be very useful if applied without adaptation to a specific domain. For example, when inspecting a project using UML artefacts, the model can be applied but does not suggest any particular inspection criteria for evaluating UML models. As happened in the two case reports, an initial effort should be devoted to the creation of checklists and criteria for the corresponding type of review. Another limitation is that the proposed model is not a tool but a process: success will always depend on the available resources (software/tools, people, etc.), although this risk is lower than in traditional inspections because it demands fewer resources. A further risk may arise from the flexibility of the proposed model.
Management or the software development team may misinterpret the flexibility of the recommendations when they really do not want to implement inspections, even when the required resources are available. As a consequence of our experiences, we propose the following future work:

• More experimentation and case studies are required to confirm with more data the evidence of benefits detected in the two experiences presented in Section 4.



• Applicability of the proposed model in a Global Software Development (GSD) environment is also a task for future work.

• The model could also be supported by an integrated software system apart from the tools which automate certain tasks. Analysis of the requirements of such a system and its development or configuration from existing environments could be another line of future work, in the line of some existing references [39] [40]. We also consider modelling the process using the reference of SPEM [68], with the possibility of implementing it through the Eclipse Process Framework Composer.

References

1. Fagan ME. Design and code inspections to reduce errors in program development. IBM Systems Journal 1976; 15(3):182-211.
2. IEEE Std. 1028-1997. Standard for Software Reviews. The Institute of Electrical and Electronics Engineers, Inc. ISBN 1-55937-987-1. 1998.
3. Lanubile F, Mallardo T, Calefato F. Tool support for geographically dispersed inspection teams. Software Process: Improvement and Practice 2003; 8(4):217-231.
4. Wiegers KE. Peer Reviews in Software: A Practical Guide. Addison-Wesley, Boston, 2002.
5. Harjumaa L. A pattern approach to software inspection process improvement. Software Process: Improvement and Practice 2007; 10(4):455-465.
6. Remillard J. Source code review systems. IEEE Software 2005; 22(1):74-77.
7. Perry DE, Porter A, Wade MW, Votta LG, Perpich J. Reducing inspection interval in large-scale software development. IEEE Transactions on Software Engineering 2002; 28(7):695-705.
8. Anderson P, Reps T, Teitelbaum T. Design and implementation of a fine-grained software inspection tool. IEEE Transactions on Software Engineering 2003; 29(8):721-733.
9. Denger C, Shull F. A practical approach for quality-driven inspections. IEEE Software 2007; 24(2):79-86.
10. Hedberg H. Introducing the next generation of software inspection tools. Lecture Notes in Computer Science 2004; 3009:234-247.
11. Tyran CK, George JF. Improving software inspection with group process support. Communications of the ACM 2002; 45(9):87-92.
12. Macdonald F, Miller J. ASSIST—a tool to support software inspection. Information and Software Technology 1999; 41(5):1045-1057.
13. Kemerer CF, Paulk MC. The impact of design and code reviews on software quality: An empirical study based on PSP data. IEEE Transactions on Software Engineering 2009; 35(4):534-550.
14. Wong YK. Modern Software Review: Techniques and Technologies. IRM Press, UK, 2006.
15. Damian D, Zowghi D, Vaidyanathasamy L, Pal Y. An industrial case study of immediate benefits of requirements engineering process improvement at the Australian Center for Unisys Software. Empirical Software Engineering 2004; 9(1-2):45-75.
16. Freimut B, Briand LC, Vollei F. Determining inspection cost-effectiveness by combining project data and expert opinion. IEEE Transactions on Software Engineering 2005; 31(12):1074-1092.
17. El-Emam K. The ROI from Software Quality: An Executive Briefing. Ottawa Software Quality Association, 2004.
18. Stewart R, Priven L. How to avoid software inspection failure and achieve ongoing quality, cost, & schedule benefits. A Stewart-Priven Group White Paper 2007; 1-13. http://www.ieee-stc.org/proceedings/2007/pdfs/RS1691a.pdf (30 Jul 2014)
19. Gilb T, Graham D. Software Inspection. Addison-Wesley, Harlow, UK, 1993.
20. Wilkerson JW, Nunamaker JF, Mercer R. Comparing the defect reduction benefits of code inspection and test-driven development. IEEE Transactions on Software Engineering 2012; 38(3):547-560.
21. Pusatli T, Misra S. Quality assurance activities in small and medium software enterprises in Turkey: an empirical investigation. Technical Gazette 2011; 18(3):447-452.
22. Ciolkowski M, Laitenberger O, Biffl S. Software reviews: the state of the practice. IEEE Software 2003; 20(6):46-51.
23. Denger C, Shull F. A practical approach for quality-driven inspections. IEEE Software 2007; 24(2):79-86.
24. Johnson PM. Reengineering inspection. Communications of the ACM 1998; 41(2):49-52.
25. Stewart R, Priven L. How to avoid software inspection failure and achieve ongoing benefits. CrossTalk: The Journal for Defense Software Engineering 2008; January:23-27.
26. Chroust G, Lexen H. Software inspections - theory, new approaches and an experiment. Proceedings of the 25th EUROMICRO Conference 1999; 2:286-293.
27. Thapliyal MP, Diwvedi P. Software process improvement in small and medium software organisations of India. International Journal of Computer Applications 2010; 7(12):37-39.
28. Macchi D, Solari M. Software inspection adoption: A mapping study. XXXVIII Conferencia Latinoamericana 2012; 1-8.
29. Wiegers KE. Peer Reviews in Software: A Practical Guide. Addison-Wesley Longman Publishing Co., 2002.
30. Kollanus S, Koskinen J. Survey of software inspection research. The Open Software Engineering Journal 2009; 3:15-34.
31. Porter A, Johnson P. Assessing software review meetings: results of a comparative analysis of two experimental studies. IEEE Transactions on Software Engineering 1997; 23(3):129-145.
32. Parnas DL, Weiss DM. Active design reviews: principles and practices. In Proceedings of the 8th International Conference on Software Engineering 1985; 132-136.
33. Macdonald F, Miller J, Brooks A, Roper M, Wood M. A review of tool support for software inspections. Proceedings of the Seventh International Workshop on Computer-Aided Software Engineering (CASE-95) 1995; 340-349.
34. Votta LG. Does every inspection need a meeting? SIGSOFT Software Engineering Notes 1993; 18(5):107-114.
35. Johnson PM, Tjahjono D. Does every inspection really need a meeting? Empirical Software Engineering 1998; 3(1):9-35.
36. Sauer C, Jeffery DR, Land L, Yetton P. The effectiveness of software development technical reviews: a behaviourally motivated program of research. IEEE Transactions on Software Engineering 2000; 26(1):1-14.
37. Misra S, Akman I. A cognitive model for meetings in the software development process. Human Factors and Ergonomics in Manufacturing & Service Industries 2014; 24(1):1-13.
38. MacDonald F, Miller J. A comparison of computer support systems for software inspection. Automated Software Engineering 1999; 6:291-313.
39. Genuchten MV, Dijk CV, Scholten H, Vogel D. Using group support systems for software inspections. IEEE Software 2001; 18(3):60-65.
40. Egorov A. Tool support for inspections. Master's thesis, Technical University of Denmark, Department of Applied Mathematics and Computer Science, 2013.
41. Aurum A, Petersson H, Wohlin C. State-of-the-art: software inspections after 25 years. Software Testing, Verification and Reliability 2002; 12(3):133-154.
42.

29. Wiegers KE. Peer Reviews in Software: A Practical Guide. Addison-Wesley Longman Publishing Co., 2002.
30. Kollanus S, Koskinen J. Survey of software inspection research. The Open Software Engineering Journal 2009;3:15-34.
31. Porter A, Johnson P. Assessing software review meetings: results of a comparative analysis of two experimental studies. IEEE Transactions on Software Engineering 1997;23(3):129-145.
32. Parnas DL, Weiss DM. Active design reviews: principles and practices. In Proceedings of the 8th International Conference on Software Engineering 1985;132-136.
33. Macdonald F, Miller J, Brooks A, Roper M, Wood M. A review of tool support for software inspections. Proceedings of the Seventh International Workshop on Computer-Aided Software Engineering (CASE-95) 1995;340-349.
34. Votta LG. Does every inspection need a meeting? SIGSOFT Software Engineering Notes 1993;18(5):107-114.
35. Johnson PM, Tjahjono D. Does every inspection really need a meeting? Empirical Software Engineering 1998;3(1):9-35.
36. Sauer C, Jeffery DR, Land L, Yetton P. The effectiveness of software development technical reviews: a behaviourally motivated program of research. IEEE Transactions on Software Engineering 2000;26(1):1-14.
37. Misra S, Akman I. A cognitive model for meetings in the software development process. Human Factors and Ergonomics in Manufacturing & Service Industries 2014;24(1):1-13.
38. MacDonald F, Miller J. A comparison of computer support systems for software inspection. Automated Software Engineering 1999;6:291-313.
39. Genuchten MV, Dijk CV, Scholten H, Vogel D. Using group support systems for software inspections. IEEE Software 2001;18(3):60-65.
40. Egorov A. Tool support for inspections. Master's thesis, Technical University of Denmark, Department of Applied Mathematics and Computer Science, 2013.
41. Aurum A, Petersson H, Wohlin C. State-of-the-art: software inspections after 25 years. Software Testing, Verification and Reliability 2002;12(3):133-154.
42. Laitenberger O, DeBaud JM. An encompassing life-cycle centric survey of software inspection. Journal of Systems and Software 2000;50(1):5-31.
43. Checkstyle. http://checkstyle.sourceforge.net/ (accessed 30 Jan 2014).
44. Jlint. http://artho.com/jlint/ (accessed 30 Jul 2014).
45. Morisaki S. Are you using a toolset in your code review? IBM 2010;Feb:1-4. Available at: http://www.ibm.com/developerworks/opensource/library/os-toolset/index.html (accessed 30 Jul 2014).
46. Duvall P. Automation for the people: Continual refactoring. Using static analysis tools to identify code smells. IBM 2008;July. Available at: www.ibm.com/developerworks/java/library/j-ap07088/index.html (accessed 30 Jul 2014).
47. Clang Static Analyzer. http://clang-analyzer.llvm.org/ (accessed 30 Jan 2014).
48. Moy Y. Static analysis is not just for finding bugs. CrossTalk: The Journal of Defense Software Engineering 2010;Sep/Oct(5):5-8.
49. Parnas DL, Lawford M. The role of inspection in software quality assurance. IEEE Transactions on Software Engineering 2003;29(8):674-676.
50. Duvall P. Automation for the people: Continuous inspection. Free yourself from mundane, manual inspections with software inspectors. IBM 2006;August:1-7. Available at: http://www.ibm.com/developerworks/java/library/j-ap08016/index.html (accessed 30 Jan 2014).
51. Runeson P, Höst M. Guidelines for conducting and reporting case study research in software engineering. Empirical Software Engineering 2009;14(2):131-164.
52. Kollanus S. ICMM: a maturity model for software inspections. Journal of Software Maintenance and Evolution 2011;23(5):327-341.
53. Stålhane T, Husain AT. Improving the software inspection process. In Proceedings of the 12th European Conference on Software Process Improvement (EuroSPI'05), Ita Richardson, Pekka Abrahamsson, and Richard Messnarz (Eds.). Springer-Verlag, Berlin, Heidelberg, 2005;163-174.
54. Hashemitaba N, Ow SH. Generative inspection: an intelligent model to detect and remove software defects. Proceedings of the Third International Conference on Intelligent Systems, Modelling and Simulation (ISMS) 2012;688-691.
55. Ohgame Y, Hazeyama A. Design and implementation of a software inspection support system for UML diagrams. IEICE Transactions on Information and Systems 2006;E89-D(4):1327-1336.
56. Liu S, McDermid JA, Chen Y. A rigorous method for inspection of model-based formal specifications. IEEE Transactions on Reliability 2010;59(4):667-684.
57. Conte T, Massolar J, Mendes E, Travassos GH. Web usability inspection technique based on design perspectives. IET Software 2009;3(2):106-123.
58. Ferreira AL, Machado RJ, Costa L, Silva JG, Batista RF, Paulk MC. An approach to improving software inspections performance. Proceedings of the IEEE International Conference on Software Maintenance (ICSM) 2010;1-8.
59. De Lucia A, Fasano F, Scanniello G, Tortora G. Evaluating distributed inspection through controlled experiments. IET Software 2009;3(5):381-394.
60. De Lucia A, Fasano F, Scanniello G, Tortora G. Improving artefact quality management in advanced artefact management system with distributed inspection. IET Software 2011;5(6):510-527.
61. Kelly D, Thorsteinson S, Hook D. Scientific software testing: analysis with four dimensions. IEEE Software 2011;28(3):84-90.
62. Shin Y, Meneely A, Williams L, Osborne JA. Evaluating complexity, code churn, and developer activity metrics as indicators of software vulnerabilities. IEEE Transactions on Software Engineering 2011;37(6):772-787.
63. McNab D, McNab A, Robinson R, Toft MW, McDonald J. An intelligent approach to inspection qualification. NDT & E International 2005;38(2):97-105.
64. Bhasin S. Quality assurance in agile: a study towards achieving excellence. Proceedings of AGILE India (AGILEINDIA) 2012;64-67.
65. Easterbrook S, Singer J, Storey MA, Damian D. Selecting empirical methods for software engineering research. In Guide to Advanced Empirical Software Engineering 2008;285-311.
66. Dittrich Y, John M, Singer J, Tessem B. Editorial: For the special issue on qualitative software engineering research. Information and Software Technology 2007;49(6):531-539.
67. Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum: Qualitative Social Research 2010;11(3).
68. Ruiz-Rube I, Dodero JM, Palomo-Duarte M, Ruiz M, Gawn D. Uses and applications of Software & Systems Process Engineering Meta-Model process models. A systematic mapping study. Journal of Software Maintenance and Evolution: Research and Practice 2013;25(9):999-1025.