Software estimation for enterprise implementation
Rajib Chatterjee
Wednesday, October 31, 2007
About a decade ago I attended a workshop on the Function Point (FP) estimation technique. Being a senior developer, I understood almost nothing from that workshop, nor was I trying to. After the session we were given a case study of a small sales-inventory application and asked to estimate its effort and schedule; fortunately, the faculty did not ask us to estimate its size, though that was the key learning from the session. After some deliberation I decided to use my gut to arrive at figures for both effort and schedule, and guess the result: my educated guess (guesstimate) was 99 percent accurate! Well, that is what estimation was in those days. We had plenty of techniques for software estimation, and a few of us diligently followed quite a few of them, while most of us used our gut to get a number and then used one of the popular techniques to match the guesstimated figure (I call this back-calculation). Moreover, the reviewers followed the same approach while reviewing the estimates.
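For readers unfamiliar with the technique, here is a minimal sketch of how an FP-based estimate is typically derived, using standard IFPUG average-complexity weights. The component counts, adjustment rating, and productivity figure below are purely illustrative assumptions for a hypothetical sales-inventory application, not numbers from that workshop.

    # Illustrative IFPUG-style Function Point estimate for a small
    # sales-inventory application. All counts and the productivity
    # figure are assumed for illustration only.

    # Standard IFPUG average-complexity weights per component type.
    WEIGHTS = {
        "EI": 4,    # External Inputs (e.g., "add item", "record sale")
        "EO": 5,    # External Outputs (e.g., stock reports)
        "EQ": 4,    # External Inquiries (e.g., price lookup)
        "ILF": 10,  # Internal Logical Files (e.g., item master)
        "EIF": 7,   # External Interface Files (e.g., supplier feed)
    }

    # Assumed component counts for the hypothetical application.
    counts = {"EI": 8, "EO": 5, "EQ": 6, "ILF": 4, "EIF": 2}

    # Unadjusted Function Points = sum of (count x weight).
    ufp = sum(counts[t] * WEIGHTS[t] for t in counts)

    # Value Adjustment Factor from the 14 general system characteristics,
    # each rated 0-5; here we assume a total degree of influence of 30.
    vaf = 0.65 + 0.01 * 30

    afp = ufp * vaf  # Adjusted Function Points

    # Convert size to effort with a productivity assumption
    # (person-hours per FP varies widely by technology and team).
    hours_per_fp = 12  # assumed productivity
    effort_hours = afp * hours_per_fp

    print(f"UFP = {ufp}, AFP = {afp:.1f}, effort approx. {effort_hours:.0f} person-hours")

The point to note is that the gut plays no role here: the number falls out of the counts, the weights, and the productivity assumption, and each of the three can be reviewed independently.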

If we dig further into this issue, we cannot blame those who often use their guesstimation powers to get a number (with +/- 30 percent variance), and I think there is one main reason for it. In the last ten years the industry did not really spend time and energy on innovating an accurate estimation model or framework. At the same time, IT companies were never challenged by their customers on quoted cost and schedule. This was primarily because of the huge dollar savings from outsourcing implementation work to low-cost countries like India, Mexico, and China; the customer never felt the need to be educated in the subject. During the 1990s the customer was even ready to outsource work on a time and materials (T&M) model. Unfortunately, under the T&M model IT companies never felt the need for an accurate estimation model, since at the end of the day customers were charged by offshore headcount and not by the size of the work.

However, during the late 1990s and the opening years of the present decade, customers felt that they were paying more under the T&M model and enforced the fixed-price model for any software implementation work outsourced offshore or nearshore. That is when the industry was faced with the challenge of estimating accurately at the proposal stage, and software estimation became one of the most critical tasks in the engineering lifecycle.
Having said this, I think the industry was a bit late to recognize the need for an accurate estimation model, since by then the nature of outsourced work had changed. There was a paradigm shift from low-value coding and bug-fixing work to high-value enterprise implementations spanning the full engineering lifecycle. While customers started expecting more and more at lower cost, IT companies struggled with cost overruns and poor quality. At the same time, industry leaders like Sun, IBM, and Microsoft challenged legacy applications by coming up with higher-end servers and PCs and, arguably, a few robust enterprise frameworks for delivering applications much faster, and customers started showing interest in migrating their existing mainframe applications to modern technology frameworks using Enterprise Architecture (EA). Many implementation frameworks evolved, like the Enterprise Unified Process (EUP), a modified version of the Rational Unified Process (RUP), the Zachman Framework, and TOGAF, to name a few, though the perfect usage of these frameworks is still a challenge.

I think the key constraints to accurately predicting cost and schedule on enterprise implementation projects include:

1. The increasing complexity and segmentation of technologies and skills:
Let us see how many different operating systems we have nowadays. I count at least 12+ across 10+ different platforms (each mobile device now comes with its own OS). Let us also count how many different programming languages we have: 100+? Besides these, we have a host of libraries, frameworks, middleware, open source, and so on. With all of this available on demand, customers' expectations are touching the sky. As somebody from IBM Rational put it, the customer assumes that developing a software application is like playing with Play-Doh, which has no specific rules and can be shaped into anything (unlike building blocks, which have specific rules), and customers keep changing their needs in mid-stream.

2. A growing “productivity gap” between the best and worst software professionals:
As I pointed out earlier, after the low-cost countries created their market space in the IT industry, cost remained the key driver, not quality. While more and more work poured in, there was an acute gap between demand and supply, and hence the IT companies had no option but to relax their recruitment processes a bit. As a result, the productivity ratio nowadays is almost 20:1 in countries like India; in other words, the best staff can do as much as 20 times the useful work of the least productive staff. Hence, as software projects become more and more complex, past project results (what we call productivity data for various technologies) no longer provide a reliable basis for new estimates, as the toy calculation below illustrates.
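Consider the same project estimated with an organization's historical average productivity versus the actual productivity of whichever team happens to be staffed (all numbers here are assumed for illustration):

    # Toy illustration (all numbers assumed) of how a wide productivity
    # spread makes a single historical baseline unreliable.

    size_fp = 500            # assumed project size in Function Points
    baseline_fp_per_pm = 10  # historical average: FP per person-month

    # With a 20:1 spread between the best and the least productive staff,
    # the team you actually get may sit anywhere in this range.
    best_fp_per_pm = 40      # assumed top performer
    worst_fp_per_pm = 2      # assumed weakest performer (20:1 ratio)

    baseline_effort = size_fp / baseline_fp_per_pm
    best_case = size_fp / best_fp_per_pm
    worst_case = size_fp / worst_fp_per_pm

    print(f"Baseline estimate: {baseline_effort:.0f} person-months")
    print(f"Actual range:      {best_case:.1f} to {worst_case:.0f} person-months")

The baseline says 50 person-months, while the actual team could plausibly finish in 12.5 or take 250; with a spread that wide, the historical average carries almost no predictive weight for a specific team.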

3. Ambiguity about what estimation technique to use:
It is difficult to decide upon an estimation technique, and people end up going by their gut to get a number, as I said earlier. The problem with gut-based estimation is that everybody pretends to be a subject matter expert, and if that is not actually the case, the project will be a disaster. For example, suppose a customer wants a tailor-made CRM application. If the estimator wants to use his or her gut, then he or she needs to be an expert in business architecture for CRM modules, should have a good understanding of third-level SOA implementation for an enterprise architecture, and should also know his or her team's productivity and other environmental factors on similar applications, just to give a "finger in the air" estimate with +/- 20 percent variance. (Please note that +/- 20 percent is no longer accepted, since a 20 percent variance will eventually bring in 50 percent rework.)

4. The potential for misalignment between estimation techniques and execution methodologies:
I have seen this many times in my 14+ years in the IT industry. You pick the Function Point estimation methodology because your organization recommends it as the only methodology for estimating size, and you are confused because it is an enterprise architecture implementation and you have decided to develop the application using an iterative or Agile methodology.

You are in a dilemma: you end up estimating either very high or very low, and then you once again decide to go by your gut (and keep the internal and external auditors happy by back-calculating the numbers using Function Points!).

Keeping all these constraints in mind, I certainly do not recommend any single estimation methodology. Based on my experience working on large multi-vendor enterprise applications, I suggest a combination of estimation methodologies to mitigate the risks related to estimation. These have been tried and tested on my projects, and I am sure others have applied the same techniques too. One such combination is sketched below.
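As one illustration of combining techniques (the specific blend, weights, and inputs below are my own assumptions, not a prescribed formula), a model-based Function Point estimate can be cross-checked against a three-point expert estimate using the classic PERT expected value, with any large disagreement flagged for review before a number is committed:

    # Illustrative combination of two estimation methodologies
    # (all inputs and the reconciliation rule are assumptions):
    # a Function-Point-derived estimate cross-checked against a
    # three-point (PERT) expert estimate.

    # Estimate 1: model-based, e.g., from an FP count x productivity.
    fp_estimate = 1540  # person-hours (assumed, from an FP model)

    # Estimate 2: expert judgment expressed as three points.
    optimistic, most_likely, pessimistic = 1100, 1600, 2600  # assumed

    # Classic PERT expected value and standard deviation.
    pert_estimate = (optimistic + 4 * most_likely + pessimistic) / 6
    pert_sigma = (pessimistic - optimistic) / 6

    # Reconcile: a simple average of the two methods, flagged for review
    # if they disagree by more than the experts' own uncertainty.
    combined = (fp_estimate + pert_estimate) / 2
    gap = abs(fp_estimate - pert_estimate)

    print(f"FP model: {fp_estimate} h")
    print(f"PERT:     {pert_estimate:.0f} h (sigma approx. {pert_sigma:.0f} h)")
    print(f"Combined: {combined:.0f} h")
    if gap > pert_sigma:
        print("Methods disagree significantly; revisit assumptions.")

The value lies not in the particular average chosen but in forcing two independent methods to confront each other: when the model and the experts disagree by more than the experts' own uncertainty, the assumptions behind one of them are usually wrong.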

The author is Group Project Manager, iGATE Global – Chennai. He can be reached at rajib.chatterjee@igate.com