Yesterday (June 9, 2010) I gave a presentation in Gothenburg (Göteborg) for a well-known large Swedish company on why – despite advances in software methodology, languages, and tools – projects still fail.
The major theme of the talk was a paradox: on the one hand we have seen tremendous improvements in languages, tools, and methodologies over the last quarter of a century, but on the other hand we have not seen a proportional improvement in application quality and development velocity.
Over the years I have worked in both small and large organizations. Based on my own experience and observations, it seems to me that the likelihood of project failure is directly proportional to the size of the organization. One explanation is, of course, that I have been particularly unlucky in my choice of organizations. However, my observations tend to match what other old veterans admit over a beer or a cup of coffee.
The first half of the seminar briefly describes some of the advances in languages, tools, and methodology – that is, the forces driving improved software quality and development velocity. The rest of the seminar tries to analyze why, despite these improvements, projects still fail.
Advances in Programming
I will not spend much blog space describing what should be well known to practitioners in the field, but will just highlight some of the topics from the first half.
During the 90s, the software factory, or process, was the major influential theme. It started with the method war in the first half of the 90s, with methods like Booch, OMT, Coad, Objectory, and Shlaer-Mellor. The war eventually faded out with the peace treaty of the Unified Method in the mid 90s. The Marshall Plan was to first create a single notation, which became UML (the Unified Modeling Language). After UML, the reconstruction focused on the method towards the end of the 90s.
The software factory was an idea far too seductive for people to ignore, and it was embraced especially in large organizations. The essence of the idea is that you put an arbitrary group of people together, force them to work according to a specific sequence of steps, and then – somehow, magically – a high-quality application pops out, delivered on time and on budget.
Every time that illusion eventually dissolved, the blame game started, and everyone and everything became a scapegoat – except the Method.
After the millennium shift, some developers got totally fed up with the processes, procedures, and policies, and started programming the application from the very first moment, ignoring all the formal buzz. These projects actually delivered, and people started to formulate the driving forces behind their success.
It was first called eXtreme Programming, but because the word extreme does not fly well in large corporations, it was renamed agile development. Over a decade of evolution, the agile principles have emerged into methodologies like Scrum. What we now see is a tendency for Scrum to become the new UM.
Despite the current progress of agility and Scrum, one can still concentrate on the core principles, the most important of which are the three questions of the daily stand-up. These questions focus on removing obstacles to completing and succeeding at the job.
In order to support the principles, there is a set of practices, such as DRY, testability, refactoring, convention over configuration, continuous integration, and more. I won't discuss them further in this blog post.
The advances in technology are quite remarkable, with new dynamic programming languages based on a fusion of object-oriented and pragmatic functional programming, and new deployment architectures such as cloud computing.
The problem remains: why, despite all the advances in methodology and technology, do projects still fail?
The Law of Gravity for the Consultancy Life
One source is the ongoing transformation of the consultancy market. The whole software industry is affected by this transition, even if a single company does not engage consultants. This transition, in turn, is a response to the ongoing organizational optimizations that large corporations exercise in order to achieve a lean-and-mean cash flow.
A single consultant is a revenue unit, and we can formulate its revenue as the product of its utilization and its maximum revenue.
The utilization (μ) is a number in [0, 1] and captures, on average, the net fraction of time a consultant produces income. An empirical observation of mine is that the utilization is approximately constant.
Two other empirical observations state that both project duration and team size are directly proportional to the likelihood that the hourly rate equals the market average.
This means we can reformulate the consultancy equation as the product of the utilization, the market average rate, and a competence coefficient (κ).
Using this, we can formulate the revenue equation for the whole team as the product of the team size, the market average rate, the utilization, and the team's average competence.
Based on the consultancy equation, we can then formulate the obvious economic inequality: salary plus cost contribution must be less than the team revenue. This revenue, in turn, is a linear function of the team competence.
I refer to this inequality as the Law of Gravity for the Consultancy Life. It means that, within a short time frame, it is impossible to adapt to market fluctuations, as we have seen during the last year.
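The equations above can be sketched in a few lines of code. This is a minimal sketch under my own naming assumptions – the post fixes only μ (utilization) and κ (competence); the concrete numbers, rates, and function names below are illustrative, not from the talk.

```python
# Sketch of the consultancy equations. All variable names and numbers
# are illustrative assumptions, not a definitive model.

def consultant_revenue(mu, market_rate, kappa, hours):
    """Single consultant: utilization x market-average rate x competence."""
    return mu * market_rate * kappa * hours

def team_revenue(kappas, market_rate, mu, hours):
    """Whole team: sum over members, i.e. team size x market rate x
    utilization x (average) team competence."""
    return sum(consultant_revenue(mu, market_rate, k, hours) for k in kappas)

def is_sustainable(salaries, cost_contribution, kappas, market_rate, mu, hours):
    """The Law of Gravity: salary + cost contribution must stay below
    the team revenue, which is linear in the team competence."""
    return sum(salaries) + cost_contribution < team_revenue(kappas, market_rate, mu, hours)

# Example: 4 consultants, utilization 0.8, market rate 100/h,
# 1600 frame hours/year, competence coefficients around 1.0.
kappas = [1.1, 1.0, 0.9, 1.0]
revenue = team_revenue(kappas, market_rate=100.0, mu=0.8, hours=1600)
print(revenue)                      # roughly 512 000
print(is_sustainable([70_000.0] * 4, 150_000.0, kappas, 100.0, 0.8, 1600))
```

Note that because revenue is linear in κ while salaries are sticky, a drop in the market rate or in utilization immediately violates the inequality – which is the point of the "law of gravity" metaphor.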
The consultancy buyers tend to devalue, or ignore, the competence coefficient (κ), and the sellers naturally tend to overstate it.
The involved players employ different compensation strategies to lower the cost and circumvent the consultancy equation. The buyers go elsewhere, using off-shoring to take advantage of lower salaries. The sellers employ rookies to take advantage of low salaries (call it junior-shoring). And finally, the consultants themselves, in order to maintain competence-based salary improvement, have two options to choose between: job-hopping or a career change.
The Consultancy Food Chain
Over time, this transforms the whole consultancy market. Let’s look at the food chain of the consultancy market.
At the top we have the traditional consultancy company that delivers a defined result (solution) for a defined amount of money. The trick in pricing is, of course, to add some margin to account for unknowns. Because of this margin, the hourly rate may seem too high to many buyers. The hourly rate is a very concrete factor, but delivered quality is a very fluffy one. The former is visible before the project starts, while the latter can only be measured towards the end of the project. Therefore, buyers tend to ignore the latter and go for companies that offer lower hourly rates.
At the bottom of the food chain, we have a new kind of consultancy brokerage firm that can offer low hourly rates because it has zero overhead cost for non-engaged personnel (people on the bench).
Squeezed in between, we have the former solution-delivering companies that nowadays try to survive on a pure staffing business. In the long haul, however, they cannot compete with pure brokerage companies.
The trend in the industry is clearly towards lower hourly rates, and therefore towards brokerage consultancy companies. But how does that affect long-term software quality?
Effective Business Model
To understand that, we have to look at the effective business model of traditional and new consultancy companies. By effective, I mean what actually happens, not what people say.
When the business focus is on delivering results, the sales people say “we need to sell more projects” and the technical people respond “sure, if we work smarter and faster using modern tools and languages, we can complete projects in a shorter time“. On the other hand, if the effective business model is delivering hours, the sales people say “sell more hours” and the technical people respond “OK, if we introduce more bugs we can stay longer on the same project“.
Of course, nobody ever tells you this, but as a technical consultant you feel it very concretely. For example, the sales people tell you that they have sold you for 1000 hours. Then you ask what you are supposed to do, and you get an answer similar to “I don’t know and I don’t care; just sit on your ass for the hours we have sold you and try to look busy“.
So, what is the impact on competence development for the individual consultant in these two opposing business models?
Clearly, in an hour-delivering business, being a better software developer is counterproductive. It is considered a problem if you complete the task in half the time you have been sold for.
In order to understand how this affects software quality over time, we need to get familiar with two concepts. The first is the collective project knowledge, which is composed of both formal (written) documentation and informal (invisible) knowledge. The problems here are the accuracy of the former and the propagation of the latter; we refer to this propagation as project knowledge transfer. Knowledge transfer takes time, however, and therefore steals time and concentration from software development.
The second concept is the draining versus contribution time of a single project member. Every new member needs to acquire the collective project knowledge before they can contribute to the project. Until that happens, the person is draining the project instead, measured in time and cost.
The draining time is always non-negative and grows with the amount of invisible but necessary project knowledge. Knowledge level, experience, and professional seniority, on the other hand, are inversely proportional to the draining time.
For a long-term project – which is typically the case in large organizations – if the staff turnover rate is too high, the average contribution time becomes too low. If the non-contribution period starts to take a significant portion of the project time, the project gradually loses its collective knowledge. In the end, “nobody has a clue“.
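The turnover effect above can be made concrete with a toy calculation. This is my own illustrative model, not one from the talk: I assume a fixed draining period per member and simply compare what fraction of a member's tenure is spent contributing.

```python
# Toy model of draining vs. contribution time. The numbers and the
# fixed-draining-period assumption are illustrative only.

def contribution_fraction(tenure_months, draining_months):
    """Fraction of a member's tenure spent contributing, after the
    draining period needed to absorb the collective project knowledge."""
    if tenure_months <= draining_months:
        return 0.0  # the member leaves before ever contributing
    return (tenure_months - draining_months) / tenure_months

# Assume 6 months of draining (invisible but necessary project knowledge):
print(contribution_fraction(36, 6))  # long tenure: most time contributes
print(contribution_fraction(9, 6))   # high turnover: mostly draining
```

With a 6-month draining period, a member staying 3 years contributes about five-sixths of their time, while one staying 9 months contributes only a third – and below 6 months the project gets nothing back at all, which is how collective knowledge gradually evaporates.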
More bang for the buck
The sad fact is that individuals and organizations have opposing ambitions, both trying to get more bang for the buck.
In one direction goes the technological evolution, with higher abstraction levels that require more involvement and therefore higher competence. In the opposite direction goes the organizational evolution, with standardization and policies that require low involvement and therefore tend to completely ignore the competence level.
The net effect
The net effect we can all read about in the newspapers: a never-ending series of failed software projects.
You can view the whole slide deck at SlideShare.