Estimation in the dark
I avoided it as long as possible, but the time came when the business guys on The Project wanted me to lay out a project plan for when I believed I could roll out version 1.0.
As a reminder, version 1.0 consisted of a restricted subset of functionality: essentially, the first ten requirements found here. This subset was filtered down from an initial set of nearly thirty use cases.
Unlike most of my domain-driven design education, I had already taken a software cost estimation course well before I returned to The Company, so I had many different tools in the toolbox. However, the only times I had used those tools were on homework projects and exams.
This time was different. This was real. This timetable would affect many other people.
I had an idea of where to start, but it didn’t help. Take a gander at the following two excerpts from Software Measurement and Estimation: A Practical Approach by M. Carol Brennan and, my three-time professor, Linda M. Laird:
“Expert opinion usually will give you the best estimates, but it requires you to have true experts available.”
Well, I was the only expert on the project, and I was no expert.
“Estimation by analogy is the preferred approach when you have decent analogs. It will probably give you the most accurate estimation possible.”
Great, except The Project was completely new and I had no other analogs against which to compare.
If you read through the rest of the techniques in the chapter, none of them receive such glowing praise as do expert opinion and analogy. This suggested my options:
- Expert opinion
- Anything else
So I was essentially winging it in the dark. I set off to fill in the “Anything else” option. Which techniques would I use?
- First, I could use my own gut instinct as an estimate. I tend to be on the pessimistic side when it comes to software in general, so I trusted myself not to give a ridiculously optimistic schedule.
- Next, I had to try function points, since they are language-agnostic and had been beaten into my brain in at least five of my software engineering courses.
- Next, I decided to try object points, which seemed like a natural candidate since I was, after all, developing an object-oriented system.
- Finally, I included use case points, since I had already developed use cases during the requirements phase.
All very well and good, until I started using the models and estimating.
- My personal gut feeling: 3 calendar months
- Function points with COCOMO II: 8 calendar months
- Object points: 6 calendar months
- Use case points: 8 calendar months (assuming 33% reduction thanks to a prototype and very liberal cost factors)
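For anyone curious how a use case points number is produced, the sketch below follows Karner's standard formula (unadjusted points weighted by actor and use case complexity, scaled by technical and environmental factors, then converted to effort at roughly 20 person-hours per point). The counts and factor values here are hypothetical stand-ins, not The Project's actual figures:

```python
# Use case points (Karner's method) -- a minimal sketch.
# All counts and factor ratings below are hypothetical.

# Unadjusted actor weight: actors tallied by complexity (weights 1/2/3).
uaw = 2 * 1 + 1 * 2        # 2 simple actors, 1 average actor

# Unadjusted use case weight: use cases by complexity (weights 5/10/15).
uucw = 3 * 5 + 7 * 10      # 3 simple use cases, 7 average

uucp = uaw + uucw          # unadjusted use case points

# Technical and environmental complexity factors. In the full method each
# is built from 13 (resp. 8) rated sub-factors; collapsed to plausible
# totals here.
tcf = 0.6 + 0.01 * 30      # technical factor total of 30 -> 0.9
ecf = 1.4 - 0.03 * 15      # environmental factor total of 15 -> 0.95

ucp = uucp * tcf * ecf

# Karner's rule of thumb: about 20 person-hours per use case point.
hours = ucp * 20
print(f"UCP = {ucp:.1f}, effort = {hours:.0f} person-hours")
```

Dividing those person-hours by your real weekly availability (for me, a part-time schedule) is what turns a tidy-looking number into a surprisingly long calendar estimate.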
I did this estimation in early September. My pointy-haired boss wanted this rolled out and in use in November. In two months. He wanted a multi-user, multi-location system in production and fully tested in two months, built by a part-time developer / full-time student who didn't have half of the tools needed for the job and who was relatively inexperienced.
This issue consistently generated the most heated arguments between me and everyone else. I was even more incredulous when I finally learned the reason: November and December are when The Company hits "peak season," so they want to keep roll-outs to a minimum then. Oh, thanks, an artificial timeline? Would you tell that to someone building a car?
Back to the numbers. What was I to do? These numbers were all over the place. After conferring with Professor Laird, I concluded that most of these models account for team communication overhead, so the actual amount of time was likely on the lower end of the scale. I took a weighted mean and decided that The Project's first roll-out would be completed in 4.5 calendar months.
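The weighted mean here is just a convex combination of the four estimates. I no longer recall the exact weights, so the ones below are illustrative guesses, deliberately skewed toward the low end for the solo-developer reason above:

```python
# Weighted mean of the four estimates, in calendar months.
# The weights are hypothetical -- chosen to lean toward the low end,
# since the models bake in team-communication overhead that a solo
# developer doesn't pay. They must sum to 1.
estimates = {"gut feeling": 3, "function points": 8,
             "object points": 6, "use case points": 8}
weights = {"gut feeling": 0.6, "function points": 0.1,
           "object points": 0.2, "use case points": 0.1}

assert abs(sum(weights.values()) - 1.0) < 1e-9
mean = sum(estimates[k] * weights[k] for k in estimates)
print(f"weighted mean: {mean:.1f} calendar months")  # about 4.6 here
```

Shifting weight toward or away from the gut-feeling estimate is exactly the judgment call the models can't make for you.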
Now the next task: a work-breakdown schedule. There was a high degree of uncertainty throughout the project: uncertainty in my own abilities, whether the business people really knew what they wanted, and the new (to me) technologies to be used. The professor and I decided that an agile-inspired approach would be best:
20 working days: 9/24 – 10/19: Release 0.2: Add New Equipment and Edit Equipment fully functional, tested, and validated.
15 working days: 10/22 – 11/9: Release 0.4: Login, Select Room, Switch Room fully functional, tested, validated and integrated with Release 0.2.
17 working days: 11/12 – 12/5: Release 0.6: View Building Equipment At A Glance, View Equipment Type At A Glance, View Equipment Type Details fully functional, tested, validated and integrated with Release 0.4.
20 working days: 12/6 – 1/17: Release 0.8: Refresh Dropdowns, Import Spreadsheet, and Add Equipment From External System fully functional, tested, validated and integrated with Release 0.6.
(yes, some new requirements are in here)
What did I do here? First, I divided the functionality into roughly equal-sized "sprints." Then I took my overall time-span, 4.5 months, and broke it down evenly between the releases. Finally, I adjusted each one according to my perception of the amount of functionality it contained.
At the end of each release iteration, The Project was to be deployable. The goal was to deliver working software at relatively short intervals due to the level of uncertainty involved. I wanted to avoid the scenario of certain units of functionality holding up overall development in case outside forces mandate a working system in some form be deployed in an emergency.
The first check-point, for release 0.2, was missed, but only narrowly, due to server problems and to my getting sucked into school work, which was always a higher priority than The Project for me.
Near winter break, the business idiots began hassling me to somehow agree that we could push The Project out over winter break, while I would be away in San Diego, even though it was neither fully tested nor complete with even the minimum functionality.
I fought them tooth and nail on this, and they informed me that they could not stall any longer: they wanted it finished in time for some conference of regional managers in January. Yay, another artificial timeline. I told them, essentially, that I was going to ignore them; they could order me to do whatever they wanted when the time came, and they would be fully responsible for the resulting mess.
Well guess what? The deadline came and passed, and certain managerial and informational tasks that I was forced to delegate to those same business people were left unfinished for weeks. Why? Because The Project was never really a top priority for them, even though they said it was quite an important initiative. The only person for whom The Project was important was me.
Closer to the deadline, during release 0.8, they pressured me again into releasing prematurely. At that point, my only objections were the incomplete testing and the possibility that a costly and unprecedented data migration would be needed if they wanted core requirements to change. I agreed, since the chance of a data migration was next to none. But again, they were unable to hold up their end of the bargain.
They eventually acquiesced to my initial early February timeline, and I in fact completed overall testing for The Project in the second week of February. I ended up being the one twiddling my thumbs waiting for them to finish tasks including but not limited to drop-down contents, user lists, and server requests.
So, if you ever find yourself estimating in the dark, I encourage you to try this approach. If it worked for me, it can work for you.