Software Estimation for Enterprise Implementation

Date: Friday, November 30, 2007

There is nothing new in what I am writing here, but I will try to simplify the estimation process so that more estimators find it useful. The first two steps use a top-down approach, whereas the last one uses a bottom-up approach. It may be noted that the steps are not interchangeable, and we need to follow all of them strictly to mitigate risk. While step 1 is performed only once, during the Inception or Pre-Inception stage, steps 2 and 3 need to be followed before and during each Iteration. A summary of all three steps is given below.

Step 1: During the Inception stage (the stage where Business Level Use Cases are available at a certain granularity, along with an approved business architecture) we can use any of the popular estimation techniques to get a ballpark estimate with a +/- 20 percent variance. I would rather go with Use Case based estimation to get a high-level size (measured in UCPs) and apply a standard productivity figure to get an Effort. Some of us may even prefer Function Point based estimation, provided we have some inputs on the data points and key transactions. These are much-discussed topics and I do not want to spend time on them here.
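
To make the arithmetic concrete, here is a minimal Python sketch of the Step 1 ballpark. The 20 hours-per-UCP productivity figure and the sample UCP count are placeholders of my own, not standard values; calibrate them against your organization's history.

    # Step 1 ballpark: size in UCPs times an assumed productivity figure,
    # reported with the +/- 20 percent variance band.
    def ballpark_effort(ucp, hours_per_ucp=20.0, variance=0.20):
        """Return (low, nominal, high) Effort in person-hours for a UCP count."""
        nominal = ucp * hours_per_ucp
        return nominal * (1 - variance), nominal, nominal * (1 + variance)

    # Example: 120 UCPs -> (1920.0, 2400.0, 2880.0) person-hours
    print(ballpark_effort(120))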

As one can envisage, we have a set of rules for the three steps: the Effort from step 2 must be within the range of step 1, and if it is not, we need to revisit step 1. Similarly, the estimated Effort for step 3 must be within the range of step 2; if not, we need to revisit step 2. So, as we proceed through these steps, we gradually reduce the variance and increase the accuracy.
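
The containment rule is easy to automate. The snippet below is only an illustration of the check, with made-up numbers.

    # Check that a later, more detailed estimate falls inside the earlier
    # step's +/- variance band; if it does not, revisit the earlier step.
    def within_band(later_effort, earlier_effort, variance):
        return earlier_effort * (1 - variance) <= later_effort <= earlier_effort * (1 + variance)

    # A Step 2 Effort of 2600 hours against a Step 1 estimate of 2400 hours +/- 20% -> True
    print(within_band(2600, 2400, 0.20))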

Step 2: This is to be performed just before starting the Iteration, when all Use Cases have been realized in the analysis model and a high-level design model is also available. Here I recommend using the Slightly Modified Complexity model (the SMC estimation methodology). A few brief snapshots explaining the model are given below.

Define the Complexity – Depending on the challenges in your Enterprise Architecture implementation, you will need to define the complexity levels for yourself. I have considered five complexities (viz. Very Simple, Simple, Moderate, Complex, and Very Complex) for four types of transactions (GUI, Logic, Database, and Output), and you may wish to follow the same. The methodology is very similar to Function Point based estimation, but it must be borne in mind that the levels are different. Once done, you should provide guidelines for choosing a specific complexity; for example, a Simple GUI should have 1-3 I/O controls, fewer than 2 drop-downs, and so on, while a Complex Logic element might have I/O XML/XSL <= 3, more than 12 transactional fields, and scope for writing a "Submit" action or an "Initialize" action.
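
One convenient way to record these definitions is a small weights table. The point values below are hypothetical placeholders of mine; the SMC model expects you to set them for your own architecture.

    # Hypothetical Complexity Point weights per (transaction type, complexity level).
    # These numbers are illustrative only and must be calibrated per project.
    SMC_WEIGHTS = {
        "GUI":      {"Very Simple": 1, "Simple": 2, "Moderate": 3, "Complex": 5, "Very Complex": 8},
        "Logic":    {"Very Simple": 2, "Simple": 3, "Moderate": 5, "Complex": 8, "Very Complex": 13},
        "Database": {"Very Simple": 1, "Simple": 2, "Moderate": 3, "Complex": 5, "Very Complex": 8},
        "Output":   {"Very Simple": 1, "Simple": 2, "Moderate": 3, "Complex": 4, "Very Complex": 6},
    }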

Now, having defined the complexity levels, sit with your Analysts and Designers to break the Use Cases down into their transactional elements and map each one to a transaction type. Typical transactional elements could be Service Implementation, Message Generation, Process Layer Implementation, DB updates (DAO), Bridge Classes, Action Classes, and Value Objects (VO). Your designer should be able to guide you through this technical breakdown. Make sure that every transactional element is mapped to a complexity: Service Implementation may be mapped to Simple or Moderate complexity under "Logic", and DAO could be mapped to either Simple or Very Simple complexity under "Database".
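
The mapping itself can be captured the same way. Only the Service Implementation and DAO entries below follow the guidance above; the rest are illustrative guesses that your designer would confirm or correct.

    # Example mapping of transactional elements to (transaction type, complexity).
    ELEMENT_MAP = {
        "Service Implementation": ("Logic", "Moderate"),       # per the text: Simple or Moderate Logic
        "DAO (DB updates)":       ("Database", "Simple"),      # per the text: Simple or Very Simple Database
        "Message Generation":     ("Logic", "Simple"),         # illustrative
        "Process Layer":          ("Logic", "Complex"),        # illustrative
        "Bridge Class":           ("Logic", "Simple"),         # illustrative
        "Action Class":           ("GUI", "Simple"),           # illustrative
        "Value Object (VO)":      ("Database", "Very Simple"), # illustrative
    }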

The rest of the process is much the same as in other estimation methodologies. The real challenge is the ability to visualize the elements and map them to their respective transaction types. This model gives us a size in Complexity Points and, as with FP, we can apply a productivity figure to get the Effort in Man Months.
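
Putting the two placeholder tables above together, the size and Effort calculation is a straightforward weighted sum. The 25 Complexity Points per Man Month productivity figure is again an assumption for illustration only.

    # Size = sum over elements of (count) x (weight for that element's type and complexity).
    def complexity_points(counted_elements, mapping, weights):
        return sum(count * weights[mapping[name][0]][mapping[name][1]]
                   for name, count in counted_elements.items())

    # Effort in Man Months = size / productivity (Complexity Points per Man Month).
    def smc_effort(points, points_per_person_month=25.0):
        return points / points_per_person_month

    # Example with the tables above:
    #   complexity_points({"Service Implementation": 6, "DAO (DB updates)": 10}, ELEMENT_MAP, SMC_WEIGHTS)
    #   -> 6*5 + 10*2 = 50 Complexity Points -> smc_effort(50) = 2.0 Man Months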

After multiplying by the Adjusting Factors we can get an estimate with a +/- 10 percent variance. For those who are new to this, an Adjusting Factor represents the degree of influence of a general characteristic, such as Data Communication or Performance, on the overall size.
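
The adjustment formula itself is not spelled out here, so the sketch below simply borrows the Function Point style value adjustment factor (0.65 + 0.01 times the sum of the degrees of influence, each rated 0-5) as one plausible way such factors could scale the unadjusted size. Treat it as my assumption, not as part of the SMC model.

    # FP-style adjustment, used here only to illustrate how factors such as
    # Data Communication or Performance might scale the unadjusted size.
    def adjusted_size(unadjusted_points, degrees_of_influence):
        vaf = 0.65 + 0.01 * sum(degrees_of_influence)
        return unadjusted_points * vaf

    # 14 factors rated 3 each -> factor of 1.07; 180 points become 192.6
    print(adjusted_size(180, [3] * 14))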

Now you need to decide whether you really want to wait for step 3 before giving a quote to the customer, since in most cases the customer will not wait until you complete step 3. You can even reach +/- 5 percent accuracy at this stage if the work is done carefully and the Analyst and the Designer have good knowledge of the technical complexity. So, I suggest that you go ahead and communicate the estimate to the customer, with a few caveats.

Step 3: Once you have the detailed design available, you can use a modified work-breakdown structure (WBS) and, using the PERT/CPM formula [(a + 4m + b)/6], arrive at the Effort for each task or line item. I have used MS Project to create a simple template, which can be used to automate approximately 60 percent of this process.
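
The per-task formula itself is tiny; the example values below are made up.

    # PERT/CPM expected Effort for one task: (a + 4m + b) / 6.
    def pert_effort(a, m, b):
        return (a + 4 * m + b) / 6.0

    # a = 12 h (most pessimistic), m = 8 h (most likely), b = 6 h (most optimistic) -> 8.33 h
    print(pert_effort(12, 8, 6))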

As stated in the beginning, Enterprise Architecture implementation is well complemented by some beautiful process frameworks like RUP, EUP, TOGAF, and Zachman, all of which have demonstrated their ability to simplify enterprise implementation. Let's assume that you have used one of these frameworks and have already prepared a high-level WBS. Organize a workshop with key developers, designers, and analysts to break each task down to its atomic level. Once you are satisfied with the level of granularity, the team can go through each task and derive its Effort using the PERT/CPM method. In the template, the "M = Most Likely" value is derived automatically based on the complexity and the type of work. Once the template gives the M value, you as the estimator should also collect the "a = Most Pessimistic" and "b = Most Optimistic" values from the team. Please keep in mind that the M value should lie between a and b (that is, either a <= M <= b or b <= M <= a).
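
As a rough sketch of the roll-up, the snippet below walks a few illustrative atomic tasks, checks that each M lies between a and b, and sums the expected Effort; in practice the MPP template would do this for you.

    # Each tuple holds (task, a = most pessimistic, m = most likely, b = most optimistic) in hours.
    tasks = [
        ("Design DAO for Customer entity", 10, 8, 6),
        ("Implement Submit action",        14, 9, 7),
        ("Unit-test Value Objects",         6, 4, 3),
    ]

    total = 0.0
    for name, a, m, b in tasks:
        # The template-derived m must lie between the optimistic and pessimistic values.
        assert min(a, b) <= m <= max(a, b), name + ": m is outside the a..b range"
        total += (a + 4 * m + b) / 6.0

    print("Expected Effort for this slice of the Iteration: %.1f hours" % total)
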
If the WBS is at the atomic level (i.e., if the Effort for each task is less than 10 hours), this template can give close to 99 percent accuracy. The entire process is time consuming and may take up to a week for bigger projects, but the effort will be worthwhile. Firstly, this exercise gives your team the most accurate estimated Effort. Secondly, the team is now fully aware of the scope. And last but not least, the same work-breakdown MPP file can be used for Iteration planning: you just need to hide the user-defined columns and bring in the conventional columns such as Work, Duration, Start, End, and Resource Names.

Repeat steps 2 and 3 for all Iterations. Also, please make sure that you revisit step 2 (the SMC estimation) after completing each Iteration to validate the numbers, and revisit the productivity figures for the next Iteration as well.
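
Recalibrating the productivity figure from actuals is simple arithmetic; the numbers below are illustrative.

    # Complexity Points delivered per Man Month, fed back into the next Iteration's Step 2.
    def recalibrated_productivity(delivered_complexity_points, actual_person_months):
        return delivered_complexity_points / actual_person_months

    # 160 Complexity Points delivered in 7.2 Man Months -> about 22.2 points per Man Month
    print(recalibrated_productivity(160, 7.2))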

These steps are not a rule of thumb or a hypothesis, but an effort on my part to initiate the thought process toward defining a more accurate estimation model. I would be glad if more Managers started thinking in this direction and sharing best practices.

The author is Group Project Manager, iGate Global, Chennai. He can be reached at rajib.chatterjee@igate.com