Written by: Al Hilwa, IDC Program Director, Application Development Software
Software development is exploding. As the world economy transforms to more digital modes of operation, the skill and competence to build custom applications are becoming table stakes in this innovation-centric world. At the same time, the volume, complexity, and importance of custom software have continued to escalate, placing an ever greater burden on software developers to get things right. Yet the innately human activity of building software systems is fraught with peril: few applications hit the mark when they launch, and even fewer survive to their first anniversary. Meanwhile, an endless, well-decorated wall of shame of public software project and product failures extends as far back in history as the eye can see. We are tempted to ask: why do software projects fail?
Software and product failure is a broad, multi-faceted, and complicated area of exploration. In the Developer Insights Report: A Global Survey of Today's Developer, 48% of developers identified “changing and/or poorly documented requirements” as the top reason for project failure. It is instructive that this was selected over “under-resourcing,” which came in at number two with 8% fewer selections. Rounding out the top four reasons cited were “poor team management” and “insufficient time allocated to testing.”
It is not surprising that requirements definition, identifying the problem to be solved, is one of the thorniest aspects of building software; this difficulty underlies many modern techniques in software development. But let's back up a bit.
Software engineering is a very young discipline whose basic tenets did not begin to be established until the 1980s. The fast pace of technology evolution in computing, a phenomenon we now broadly refer to as Moore's Law, has also meant that few projects are ever repeated with comparable technologies or under comparable circumstances. The ever-accelerating velocity of business evolution only compounds this problem. Starting in the mid-1990s, software engineering was characterized by a shift in the approaches taken to manage large projects toward what are now known as agile development methodologies. These methodologies, in popular incarnations such as Scrum and Extreme Programming, focus on rapid iteration in order to accommodate inevitably changing requirements and priorities.
An early criticism of agile approaches was that they sidestepped the problem of poorly defined requirements rather than solving it. In other words, by integrating the target user into the software development team, validating intermediate and partial versions of the software with users often, and iterating rapidly to adapt to user feedback, the entire requirements definition phase can simply be skipped. These criticisms are legitimate because many projects require a higher degree of up-front analysis to understand user requirements thoroughly. A system may have a diversity of users too numerous to represent on an extended agile team. Users often have difficulty articulating genuine feedback on partially built systems outside of real-life usage scenarios. Prior project failures should be adequately understood and post-mortems reviewed in order to avoid repeating the same mistakes. In short, agile techniques on their own often require augmentation and adjustment. The importance of integrated testing and quality control emerged somewhat later in software engineering. Today, a variety of tools and techniques, both upstream (e.g., requirements definition) and downstream (testing and quality control), are integrated into most agile approaches to tackle the central problem of project failure.
Additionally, it is important not to ignore the role of software architecture and team organization in creating flexible systems that can be adjusted rapidly and evolved to meet changing requirements. A long-running software principle, known as divide-and-conquer, calls for decomposing a problem until its individual components become tractable and manageable. Today we call this a microservices architecture, and it is increasingly being adopted along with API approaches to define component boundaries. A new focus on keeping teams that work on a single software component small (e.g., the two-pizza rule: a team should be no larger than can be fed by two pizzas) is also an important organizational improvement and has done a great deal to limit broad organizational dysfunction in software development. In fact, in our survey, 60% of developers reported that they work in teams of five or fewer people.
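The idea of decomposition along API-defined boundaries can be sketched in a few lines of code. This is an illustrative sketch only: the service names (InventoryService, OrderService) and their methods are hypothetical, and in a real microservices deployment the boundary would be a network API (e.g., HTTP) rather than an in-process call. The point is that each component exposes a small, explicit interface, so a small team can own and evolve one service without touching the internals of another.

```python
class InventoryService:
    """Owns stock data; other components never touch it directly."""

    def __init__(self):
        self._stock = {"widget": 3}  # internal state, hidden behind the API

    def reserve(self, item: str, qty: int) -> bool:
        """Public API: reserve stock if available, else refuse."""
        if self._stock.get(item, 0) >= qty:
            self._stock[item] -= qty
            return True
        return False


class OrderService:
    """Depends on InventoryService only through its public API."""

    def __init__(self, inventory: InventoryService):
        self._inventory = inventory

    def place_order(self, item: str, qty: int) -> str:
        # The order logic never inspects inventory internals; it only
        # calls the agreed boundary method, so either side can change
        # its implementation independently.
        if self._inventory.reserve(item, qty):
            return "confirmed"
        return "rejected"


orders = OrderService(InventoryService())
print(orders.place_order("widget", 2))  # confirmed
print(orders.place_order("widget", 2))  # rejected (only 1 widget left)
```

Because the boundary is explicit, swapping the in-process call for a remote API later changes only the wiring, not the order logic, which is exactly the flexibility the divide-and-conquer principle is after.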
Project failure is expensive. Sunk costs in a failed project add to the enormous opportunity cost of the successful project that never materialized. Furthermore, these costs escalate toward the end of the project because failure is invariably preceded by failed attempts to avert it, and these attempts come late, when fixes are more complex, time-consuming, and expensive than adjustments made at an earlier stage. While it is hard to argue against the high costs of failure, it is sometimes forgotten that success also requires costly up-front investment. Even with the best methodologies, software architectures, and organizational approaches, software developers would do well to invest more time and effort in understanding the requirements of the systems they plan to build, and in the infrastructure and processes to test those systems.
Al Hilwa serves as Program Director for IDC's Application Development Software research. In this role, Mr. Hilwa provides thought leadership, expert opinion, analysis, research, and competitive intelligence on all issues related to application development technologies, processes, and audiences. Mr. Hilwa also offers technology and vendor advice to investment and technology firms, as well as Global 2000 end-user companies that subscribe to IDC's services.