
Having Fun with Customers: Lessons Learned From an Agile Development of a Business Software

Zana Vosough, Matthias Walter, Jochen Rode, Stefan Hesse
SAP SE, Dresden, Germany
{zana.vosough; matthias.walter; jochen.rode; s.hesse}@sap.com

Rainer Groh
TU Dresden, Dresden, Germany
[email protected]

ABSTRACT

In this paper, we share our experiences with the agile software development practice that we follow at SAP SE for designing and developing a new standard business software, SAP Product Lifecycle Costing for SAP S/4HANA. We describe our interaction with co-innovation customers as well as our processes for planning, designing, and testing, which are based on a combination of Design Thinking, Scrum, Kanban, and Test-Driven Development. Finally, we highlight the challenges we faced, ranging from how to manage customer expectations to how to speed up test/build processes.

Author Keywords

Agile Development; Design Thinking; Customer Engagement; Co-Innovation; Co-Development; Stakeholder Involvement; Test-Driven Development

ACM Classification Keywords

K.6.1. Systems development, systems analysis and design; K.6.3. Software development

INTRODUCTION

The use of agile software development methodologies and close co-innovation with customers has been shown to result in better products [11] – a view that we share based on our own experiences. Involving customers from an early stage brings value to a project in several ways. First, customers know the market needs intimately and can share their knowledge and experience. Second, close collaboration with customers facilitates the collection of requirements and thus maximizes the fit of the product to their needs through iterative cycles of design and validation. Less costly rework, higher customer satisfaction, and a readiness to respond to change are just some of the resulting benefits. Agile methodologies have already been described from a theoretical perspective in several publications [2, 3, 8, 12], in other domains such as computer-supported cooperative learning [7], and in a state-of-the-art review [13]. In this paper, we share our practical experiences and the lessons learned from implementing a new business software based on a combination of agile software development methods.

License: The authors retain copyright, but ACM receives an exclusive publication license.
CONTEXT

The Product

SAP Product Lifecycle Costing for SAP S/4HANA (PLC) is a solution for calculating the costs of new products or quotations, quickly identifying cost drivers, and easily simulating and comparing alternatives. PLC was developed in close collaboration with customers and partners over a period of three years. Development of new releases is ongoing.

The Team

The project team comprises two sub-teams. Our five-person Go-To-Market (GTM) team, led by the Solution Owner, is responsible for establishing and maintaining customer relationships, producing marketing collateral, and supporting sales. The development team, led by the Chief Product Owner (CPO), develops the software. Requirements analysis and functional design are shared between the two teams and are done in close cooperation with customers.

The development team is organized into four Scrum teams of about 10 developers each. Each Scrum team has its own product owner (PO) and scrum master (SM) and takes full responsibility for developing agreed-upon functionality (user stories and backlog items). In addition, several colleagues have cross-team responsibilities, such as developing the business and technical architecture, writing documentation, and testing functional correctness, usability, and performance.

Two of the Scrum teams work in Germany and two in Romania. The literature describes many drawbacks of distributed teams [1]. We experienced several problems ourselves, which is why we make an effort to have face-to-face contact every 4-6 weeks. In addition, we use video conferencing for the times in between. Due to the high quality of recent systems, this online communication works well for us. Video calls are used routinely not only for meetings but even for longer pair-programming sessions.

The product owners are the team's first contact for questions and workload management. They communicate the purpose and requirements of new features and manage the team backlog. The CPO represents the development team as a whole and manages the overall product backlog. The scrum masters ensure process quality and help resolve problems within their teams. After each three-week-long development sprint, every team runs a retrospective meeting to discuss what went well in the last sprint and what needs to be improved. A cross-team retrospective summarizes these points and addresses cross-team issues.

The Stakeholders

More than 30 co-innovation customers currently work with us or have helped us discuss requirements in the past. The initial trigger to develop a standard software for early product costing was a request from a customer. After this, we built up two groups of co-innovation customers: one group of 8 German companies from the automotive and industrial machinery industries, and a second group of 10 US-American companies from different industries.

Since PLC became generally available in September 2015, we have produced new software releases approximately every quarter. In the same quarterly rhythm, we run regular co-innovation workshops with the German customer group. Every workshop lasts one and a half days. On day 1 we typically first give an overview of the newly available functionality and then run a usability test. For practical purposes, we usually have six to seven stations where three end-users each (mostly highly experienced product controllers or IT experts) from different companies perform a set of pre-defined tasks. We observe these sessions and take notes of critical incidents [10] – positive or negative. At the end, each of the groups summarizes its impressions in the plenum (usually around 50 participants, including customers, partners, and representatives from the GTM team and the development team). We usually conclude day 1 with a joint dinner – an ideal environment for sharing "stories from the trenches" and deepening personal relationships. On day 2 of the workshop we typically discuss requirements and jointly come up with rough design sketches, often using the Design Thinking methodology. We run similar workshops with the US-American co-innovation group but, for practical reasons, only once or twice per year face-to-face, and as shorter online meetings otherwise.

METHODOLOGY

Our development methodology is a mix of established "best practices" such as Design Thinking, Scrum, and Test-Driven Development (see Figure 1) and several "home-grown" adjustments and optimizations, which evolved from our continuous improvement approach in the retrospective meetings.

Figure 1. We use a mix of agile methodologies

Design Thinking

Design Thinking (DT) is a collaborative problem-solving approach involving people with different backgrounds [9]. DT advocates the fast generation of ideas and prototypes with the goal of designing a solution that is feasible ("it can be done"), viable ("it can be sold"), and desirable ("users will love it"). In our co-innovation workshops we use a DT-like approach for eliciting requirements from end-users and for designing rough solution sketches.

Every DT session starts with a design challenge, such as "How can we represent product variants within a product cost calculation?" Often, customers explain how their processes run today and mention related challenges. Then, after some silent "brain-writing" (typically 5-7 minutes of writing ideas on sticky notes), every participant presents their ideas to the group (typically no more than 8-10 people). Others may ask, comment, and extend the ideas but not "shoot them down". Finally, all input is grouped and prioritized, and a joint prototype is developed on a whiteboard. Paper, pens, and any other materials may also be used for developing mockups. We explicitly ask everyone to refrain from using their computers and phones during the sessions to avoid distractions.

The advantage of discussing requirements, priorities, and designs with a group of customers instead of with each customer individually is that it is easier to come to agreements. When developing standard software (as opposed to customer-specific software), the development team must balance the requirements of all customers, which are almost never the same. Compromises are inevitable, but they are easier to reach if the discussion is between customers rather than between one customer and us.

After the workshop, the development team designs low-fidelity and high-fidelity UI mockups for the new functionality based on the ideas and prototypes from the workshop. These designs are usually presented at the next customer workshop, and customers give final feedback before implementation starts. Typically, this requires only minor "tweaking" of the design. However, we rarely had a session where the designs were accepted without change requests.

The implementation begins after this feedback. This iterative feedback process ensures that future functionality addresses the requirements. On the downside, the time between the initial requirements analysis and the final delivery of the features is often months instead of weeks or days – a process challenge we still need to solve.

Scrum & Kanban

Scrum is a framework for agile iterative product development that works best in small, self-organizing, cross-functional teams [14]. In our experience, detailed long-term planning of capacity and timing for feature development is very difficult and rarely works, because most features have hidden requirements or challenges that only come to light during the technical design or even the implementation. For this reason, we follow the Scrum methodology to a large degree, but not entirely.

We plan for the duration of one sprint. We have experimented with different sprint durations; for us, the best balance of development time and formal meetings (planning, review, retrospectives) is three weeks. However, this is likely a consequence of our team size – other teams within SAP use shorter or longer sprint durations. As opposed to "Scrum by the book", we do not make detailed time estimations and formal commitments for every user story or backlog item. In practice, this proved to be mostly disappointing, frustrating, and even demotivating when features did not get done in time. Instead, we combine Scrum with elements from Kanban [6]: every user story or backlog item moves at its own pace through well-defined stages. This allows features to get done faster or take more time depending on their complexity.
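The Kanban idea of stories moving through well-defined stages at their own pace can be sketched in a few lines. This is an illustrative sketch only; the stage names "Backlog" and "Done", the story titles, and the dates are our assumptions, while "In Design", "In Development", and "In Testing" follow the stages named later in the paper.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative stage names; the paper's Figure 2 defines the actual stages.
STAGES = ["Backlog", "In Design", "In Development", "In Testing", "Done"]

@dataclass
class UserStory:
    title: str
    stage: str = "Backlog"
    history: list = field(default_factory=list)  # (stage left, date) pairs

    def advance(self, today: date) -> str:
        """Move the story to the next stage, recording when it left the old one."""
        i = STAGES.index(self.stage)
        if i == len(STAGES) - 1:
            raise ValueError(f"'{self.title}' is already Done")
        self.history.append((self.stage, today))
        self.stage = STAGES[i + 1]
        return self.stage

# Each story moves at its own pace -- there is no per-sprint commitment.
simple = UserStory("Rename column header")          # hypothetical story
complex_story = UserStory("Variant cost handling")  # hypothetical story
simple.advance(date(2016, 1, 4))
simple.advance(date(2016, 1, 6))        # the simple story races ahead...
complex_story.advance(date(2016, 1, 4)) # ...while the complex one is still in design
```

The history list is what makes the pace visible per story, which is the property the combined Scrum/Kanban approach relies on.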

This combined Scrum/Kanban approach works well for us but has the downside that development time is hardly predictable.

As part of our process, we run regular development meetings for synchronization and planning. Most meetings are held face-to-face as well as via video conferencing because not everyone is co-located.

Release Prioritization:
- 4-6 weeks before the next release starts
- ~2 hour meeting of the CPO and the Solution Owner (lead of the GTM team) to prioritize a "candidate list" of features for the next release

Release Feature Estimation:
- 3-4 weeks before the next release starts
- Several ~2 hour meetings in which all POs and the architects discuss the details of the candidate features and estimate rough efforts in person days

Release Planning:
- ~1 week before the next release starts
- ~2 hour meeting of the GTM team, the business architect, and the CPO to agree on the final feature priorities for the upcoming release based on customer priorities, effort estimates, and overall development capacity

Release Kick-off:
- First week of new development
- ~1.5 hour meeting with the entire PLC team to communicate and discuss the release plan and focus topics

Feature Planning:
- Every 3 weeks, i.e., once every sprint
- The GTM team and the POs talk about the current development status, make minor adjustments to feature priorities, discuss high-level concepts, and agree on follow-up meetings (e.g., "deep-dive" design workshops with individual customers)

Story Planning:
- Weekly
- ~1.5 hour meeting with all POs and architects to synchronize on the current status, discuss technical challenges and capacity issues, and assign user stories or backlog items to teams

Daily Scrum:
- 15 minutes within each of the four development teams
- Everyone says what they are working on and whether they have encountered any problems

Daily Scrum of Scrums:
- 15 minutes (just before lunch)
- All POs and colleagues with cross-functional responsibilities sync about status and problems

Sprint Review:
- Every three weeks at the end of every sprint (except in the busy three weeks before Emergency Correction Close)
- Each developer presents the user story that he or she has finished in the sprint (if any) in front of the entire PLC team

Sprint Retrospective:
- Every three weeks at the end of every sprint
- 1 hour meeting within each team to discuss "the good, the bad, and the ugly" and decide on follow-up actions
- This meeting is also held at the cross-team level, right after each team has finished its retrospective

Team Planning/Grooming:
- Once or multiple times per sprint, as decided by each team individually (teams are empowered to fully self-organize)
- Discuss current tasks and clean up ("groom") the team backlog

In addition, several other, smaller meetings take place within the GTM-team and between the GTM-team and the development team.

Figure 2. Life Cycle of a Feature

While most colleagues prefer designing and implementing software to spending time in planning meetings, we found that this set of meetings is necessary for us to stay in sync and manage the process professionally. Earlier attempts to radically reduce meetings resulted in miscommunication, duplicated work, and rework, which is why the process is now accepted within the team. Important for this acceptance was the fact that the process was designed and refined by the team itself instead of being dictated from above. In consequence, we benefit from the opposite of the "not-invented-here" syndrome.

Test-Driven Development

Test-Driven Development (TDD) is an evolutionary approach in which developers create tests before writing new functional application code [5]. TDD treats writing tests as part of a requirements and design activity in which a test specifies the code's behavior. TDD is mainly used to achieve a high degree of quality, especially when dealing with evolving requirements and their impact on existing functionality. Perhaps one of the biggest benefits of TDD is that a developer can modify existing code without being afraid of introducing undiscovered bugs. If done right, developers become "fearless" (in the positive sense).

In our experience, most new features we add to the application also introduce new bugs. This is almost inevitable in complex software. However, bugs are only a problem if they stay undiscovered for a long time, which is not the case with good test coverage. We use a combination of unit tests (checking single methods/functions) and integration tests (checking the interaction between components, including end-to-end scenarios). Our TDD is supported by review meetings and manual acceptance tests. All source code has to be reviewed by a peer before merging it into the overall application code, unless it has been developed while pair-programming [15].

In our experience, TDD does not come naturally to most developers. It has to be trained and promoted, and the benefits have to be experienced personally. Despite significant efforts in this respect, many of our developers still prefer writing tests after implementing the functionality, perhaps because of the perception of being faster when implementing functionality first. Our most experienced developers, however, "live, breathe, and enjoy" TDD.
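The TDD cycle described above can be illustrated with a deliberately small, hypothetical example. The function, its name, and its behavior are our own invention for illustration (loosely inspired by the product-costing domain), not code from PLC.

```python
import unittest

# In TDD, the tests in TotalCostTest below are written FIRST and fail ("red");
# total_cost is then implemented until they pass ("green").
def total_cost(items):
    """Roll up the cost of a bill of materials.

    items: list of (unit_price, quantity) tuples.
    """
    if any(qty < 0 for _, qty in items):
        raise ValueError("quantity must be non-negative")
    return sum(price * qty for price, qty in items)

class TotalCostTest(unittest.TestCase):
    def test_empty_bom_costs_nothing(self):
        self.assertEqual(total_cost([]), 0)

    def test_costs_are_weighted_by_quantity(self):
        # 2.5 * 4 + 1.0 * 2 = 12.0
        self.assertEqual(total_cost([(2.5, 4), (1.0, 2)]), 12.0)

    def test_negative_quantity_is_rejected(self):
        with self.assertRaises(ValueError):
            total_cost([(3.0, -1)])

if __name__ == "__main__":
    unittest.main(exit=False, argv=["tdd_demo"])
```

With such tests in place, a developer can later rework the cost roll-up without fear: any regression turns a green test red immediately.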


Earlier we described how we use a combination of Scrum and Kanban to manage feature development. Each feature moves through the stages shown in Figure 2. More precisely, a feature (i.e., the smallest useful unit of functionality for a customer) is often split into several user stories. However, not all user stories move at the same pace through the process. Simple user stories may move through all stages within one or two weeks, while others need the entire duration of the release development cycle. Therefore, the process shown in Figure 2 is performed many times within one release cycle – once per user story, to be exact.

Design

The features in the backlog are largely defined by customers within the co-innovation workshops, as are the requirements and the rough design. For complex features, we also set up additional meetings with customers who are experts in a particular domain. This interaction is part of the Design phase. At the end, all features are usability-tested by our customers in the following co-innovation workshop. This usually uncovers small usability issues, which are scheduled to be fixed in the next release cycle.

Before a user story can progress from one stage to the next, certain quality gates have to be passed. For example, before a user story can move from "In Design" to "In Development", a Functional Specification (FS) must be written and reviewed which specifies how the feature should work from a user's point of view. The review must be done by at least one person from the GTM team and at least one developer. The FS also serves as a communication medium for all software developers, GTM colleagues, and documentation experts. Before we introduced this rather strict review process, we experienced many misunderstandings, especially in the cooperation between the GTM team and the development team. These problems have largely disappeared. In addition, a Technical Specification (TS) must be written and reviewed which defines the technical architecture of the feature.

Development

A user story is implemented based on the FS and TS. To move a user story from "In Development" to "In Testing", the feature needs to be fully implemented and the source code reviewed by a peer. Also, precise acceptance criteria (ACs) have to be defined. These are used to validate that the feature matches the specification.
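The quality gates guarding stage transitions can be sketched as a small state machine. The stage and artifact names (FS, TS, ACs) follow the text; the concrete boolean checks are simplified assumptions for illustration.

```python
# Gate conditions per transition; each lambda inspects the story's state.
GATES = {
    ("In Design", "In Development"):
        lambda s: s["fs_reviewed"] and s["ts_reviewed"],
    ("In Development", "In Testing"):
        lambda s: s["code_reviewed"] and s["acs_defined"],
}

def move(story, current, target):
    """Advance a story only if the gate for this transition passes."""
    gate = GATES.get((current, target))
    if gate is None:
        raise ValueError(f"no transition {current} -> {target}")
    if not gate(story):
        raise ValueError(f"quality gate {current} -> {target} not passed")
    return target

# Hypothetical story: FS and TS reviewed, code not yet reviewed.
story = {"fs_reviewed": True, "ts_reviewed": True,
         "code_reviewed": False, "acs_defined": False}
stage = move(story, "In Design", "In Development")  # gate passes
```

Attempting `move(story, "In Development", "In Testing")` on the same story raises, because neither the code review nor the ACs are in place yet: exactly the behavior the review process enforces.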

ACs are a semi-formal description of the software's expected behavior based on Hoare logic [4]. We started using ACs after facing problems keeping code, tests, and documentation synchronized. In this respect, ACs have become the single source of truth, and we spend considerable effort keeping them up-to-date and in sync with the implementation. Originally, we made ACs part of the "In Design" phase, where they logically belong. However, we found that, given a well-written FS and TS, the ACs can also be authored in parallel to the implementation, which increased our development efficiency.

Testing

In the Testing phase, the user story is tested against the ACs. As a result, small changes often have to be made to the code or the ACs. For many, but not all, features, automated user interface tests (AUIT) are developed. These AUIT are executed every night, which results in a daily e-mail to the team listing all passed and failed tests. If a code modification introduces a bug, it is uncovered quickly and without human effort. Similar to the AUIT, we have automated performance tests, which are executed for the most critical or performance-sensitive features. The performance tests use representative test data and simulate the behavior of multiple users accessing the system at the same time – an approximation of what we expect from productive use. Again, a daily report shows how the developments of the last day have positively or negatively affected performance. Figure 3 summarizes the artifacts which we produce for every user story that we develop.
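An AC in the Hoare-logic sense pairs a precondition with a postcondition around an operation. The following sketch shows how such an AC could be made executable; the operation ("delete an item from a calculation") and all names are hypothetical, not taken from PLC.

```python
# Hypothetical AC: "Given a calculation that contains an item, deleting the
# item removes it and reduces the item count by exactly one."
def check_delete_item_ac(calculation, item_id, delete_fn):
    # Precondition (Hoare-style {P})
    assert item_id in calculation["items"], "precondition: item must exist"
    count_before = len(calculation["items"])
    delete_fn(calculation, item_id)
    # Postcondition (Hoare-style {Q})
    assert item_id not in calculation["items"], "postcondition: item removed"
    assert len(calculation["items"]) == count_before - 1, \
        "postcondition: item count reduced by one"

def delete_item(calculation, item_id):
    """Hypothetical implementation under test."""
    calculation["items"].remove(item_id)

calc = {"items": ["pump", "housing", "seal"]}
check_delete_item_ac(calc, "seal", delete_item)  # passes silently
```

Written this way, the AC doubles as a test, which is one way the ACs can stay the single source of truth for code, tests, and documentation.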

Figure 3. Artifacts of a user story

Before a software release is shipped to our customers, we spend three weeks on integration testing and final bug fixing. After the GTM team and SAP-internal consultants have tested all features, our partners and some of our co-innovation customers do the same during the so-called Acceptance Testing (AT). For the AT we have used two approaches to testing: one based on detailed step-by-step instructions (reusing the ACs), and the other based on much shorter, more general instructions. While detailed instructions have the benefit of leaving no doubt about what needs to be tested and how, they require high effort to create and maintain, and most testers do not like them because they are tedious. For this reason, we now opt for a more "exploratory testing" approach based on very short descriptions. Apart from being "more fun", these tests also uncover unexpected issues, as the testers test in unpredictable ways.

Automation and Virtualization

We continuously strive to automate technical processes. Aside from the AUIT and automated performance tests, we also have a highly automated build process. We use gated check-ins so that all code is tested against all unit tests and integration tests before it is merged into the main development code line. Our development systems are fully virtualized, allowing us to quickly create new virtual machines when needed or to dynamically adjust resources (e.g., available main memory or number of processor cores). This level of automation and virtualization has significantly improved our efficiency over time. Maintaining and increasing the level of automation comes at the cost of one full-time expert for our team.

LESSONS LEARNED (AND LEARNING)

During the last three years, we have learned many lessons. Perhaps most importantly, we learned that there is no such thing as a "one-size-fits-all" process. While our development team grew from only a handful of developers to four fully staffed teams of ten, we had to constantly adjust our processes, mixing established "best practices" like Scrum with our own optimizations.

We have both under-managed and over-managed our development. When we started, we felt that we lived agile development by not doing much planning at all. This did not scale beyond the number of people that fit into a small room. Later, we increased our meeting count to a degree that the perceived time remaining for development was smaller than that used for communication. Therefore, we changed our meeting schedule once again, to a more sustainable level. Communication and synchronization, once the source of most pain, now runs smoothly.

We also found that continuous learning is key – and takes time. When junior colleagues joined the team, we spent much effort on knowledge transfer through self-paced online courses, classroom trainings, mentoring, and learning by doing (e.g., fixing simple bugs). In hindsight, this time was well invested – and well underestimated in the beginning. Usually it takes many months or even years before a junior software developer reaches the productivity of a senior developer; the difference can easily be 200-500%.

Effort estimation proved to be another challenge, which took much time to get under control. Especially in the beginning of our development, effort estimates were often off the mark by a factor of 2 or 3. We learned that a good effort estimate requires a detailed design. Also, we now routinely multiply our estimates for net development days by a factor that covers design, tests, and the unexpected. All estimations are done jointly by the product owners (senior developers), which further improves accuracy.
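The estimation rule above is simple arithmetic, sketched here for concreteness. The overhead factor of 2.0 is purely illustrative; the paper does not disclose the team's actual multiplier.

```python
def gross_estimate(net_dev_days: float, overhead_factor: float = 2.0) -> float:
    """Gross effort = net development days times a factor covering
    design, tests, and the unexpected (factor value is an assumption)."""
    return net_dev_days * overhead_factor

# A 10-day net implementation estimate becomes a 20-day gross estimate.
print(gross_estimate(10))  # -> 20.0
```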
Our focus on co-innovation with customers proved to be highly successful. All of our developers meet and talk with customers personally – not everyone all the time, but everyone sooner or later. We found this to be highly beneficial, not only for understanding customer needs but also for feeling them. We routinely receive the highest marks when we ask our customers about their level of trust and satisfaction in collaborating with us. We feel the same respect for them, which makes the work not only more successful but also more fun. Another source of joy and motivation have been our release parties within the team – mostly get-togethers with beer and fast food, sometimes a little more. While the business benefit can hardly be quantified, the smiling faces of our colleagues are likely worth it.

However, we are not done learning. Managing customer expectations remains an ongoing challenge for us. Not unexpectedly, customers come up with more new ideas and feature requests than the development team can handle within the capacity of each release cycle. As a result, the number of backlog items grows rapidly with the number of customers and partners joining the co-innovation approach. While a broad base of co-innovation customers representing an adequate market sample is an obvious strength, it has become increasingly difficult to satisfy every individual customer. We have tried transparent methods of collaborative feature prioritization but are not done improving them yet.

Over time we have learned to appreciate the ever-growing number of automated tests. However, we also have to live with the ever-increasing time for code check-ins, which in effect limits the iteration speed of every developer – poison for fast development. More work is required here to shorten build/test times through parallelization or yet-to-be-discovered approaches. Last but not least, we keep questioning ourselves about all the artifacts that we produce that a customer never sees or pays for (at least not directly) – such as the FS, TS, ACs, and AUIT. Also, we have not yet found the silver bullet for keeping specification and software in sync with minimal effort. Finding the right balance is likely to remain a challenge for the future.

CONCLUSIONS

We shared our experiences and lessons learned in living an agile process for developing business software. When we started development, our process was not as comprehensive as described in this paper. Rather, we went through a process of trial and error, adopting and mixing best practices with our own optimizations. While we do not claim that our process is best practice, we have received much recognition within our company as well as from our customers and partners, particularly for our co-innovation methodology. We continue to learn and adapt, as we believe that software development practices still have a long way to go.

ACKNOWLEDGMENTS

We thank our customers for their trust and their insight, and the PLC team for being the great team that it is. Finally, a special thanks to Theo Dirk Meijler for contributing his thoughts and experiences.

REFERENCES

[1] Berczuk, S. 2007. Back to basics: The role of agile principles in success with a distributed Scrum team. Agile Conference (AGILE 2007), 382–388.
[2] Cohen, D. et al. 2003. Agile software development. DACS SOAR Report 11.
[3] Highsmith, J. and Cockburn, A. 2001. Agile software development: The business of innovation. Computer 34, 9 (Sep. 2001), 120–127.
[4] Hoare, C.A.R. 1969. An axiomatic basis for computer programming. Commun. ACM 12, 10 (Oct. 1969), 576–580.
[5] Janzen, D. and Saiedian, H. 2005. Test-driven development: Concepts, taxonomy, and future direction. Computer 38, 9 (2005), 43–50.
[6] Kniberg, H. and Skarin, M. 2010. Kanban and Scrum – Making the Most of Both. Lulu.com.
[7] Leinonen, T. and Durall-Gazulla, E. 2014. Design thinking and collaborative learning. Comunicar 21, 42 (2014), 107–116.
[8] Lindstrom, L. and Jeffries, R. 2004. Extreme programming and agile software development methodologies. Information Systems Management 21, 3 (2004), 41–52.
[9] Meinel, C. and Leifer, L. 2010. Design thinking research. In Design Thinking: Understand – Improve – Apply. Springer, Heidelberg.
[10] Morgan, D.L. 1996. Focus Groups as Qualitative Research. Sage Publications, 8–17.
[11] Neale, M.R. and Corkindale, D.R. 1998. Co-developing products: Involving customers earlier and more deeply. Long Range Planning 31, 3 (1998), 418–425.
[12] Paetsch, F. et al. 2003. Requirements engineering and agile software development. Proceedings of the Twelfth International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (Washington, DC, USA, 2003), 308–.
[13] Razzouk, R. and Shute, V. 2012. What is design thinking and why is it important? Review of Educational Research 82, 3 (Sep. 2012), 330–348.
[14] Schwaber, K. 1997. SCRUM development process. In Business Object Design and Implementation: OOPSLA '95 Workshop Proceedings, Austin, Texas, J. Sutherland et al., eds. Springer London, 117–134.
[15] Williams, L. et al. 2000. Strengthening the case for pair-programming. IEEE Software 17, 4 (2000), 19.