Priority Based Non-Scheduling

Posted by Andy Singleton on October 29, 2007 08:02:00 AM
Software project estimating is hard, and I would argue that in many cases, it isn't worth the trouble.  It SEEMS useful to have an estimate, but often it does not make any difference in what you actually do. If you know the value and priority of your tasks, and you always work on the highest value task that remains, then the work that you actually do is not affected by the estimates that you have made, right or wrong or absent.  I dub this discipline of working on the most important things first Priority Based Non-Scheduling.
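In code terms, the discipline is just a priority queue worked against a budget: pop the highest-value task that remains, and stop when the time or money runs out. A minimal sketch in Python, with an invented backlog and made-up value scores:

```python
import heapq

# A hypothetical backlog of (value, task) pairs; higher value = more important.
backlog = [
    (9, "fix login crash"),
    (3, "polish settings page"),
    (7, "add export to CSV"),
]

# heapq is a min-heap, so negate the values to pop the highest-value task first.
queue = [(-value, task) for value, task in backlog]
heapq.heapify(queue)

budget = 2  # you work until the budget runs out, not until the list is empty
done = []
while queue and len(done) < budget:
    _, task = heapq.heappop(queue)
    done.append(task)

print(done)  # the two highest-value tasks; no estimate was ever consulted
```

Notice that nothing in the loop reads an estimate: right, wrong, or absent, the estimates would not change which tasks get done.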

People often ask me how long it will take to build and release a particular new product. That’s a hard question. Fortunately, it has an easy, and correct, answer. I estimate how much time and money this particular product release is worth, and I use that budget, regardless of the scale of the tasks involved. A more insightful answer is possible (see below), but it usually won't help you, so your insight is better spent somewhere else.

The "what it's worth" approach works well for new products because:

* It’s the right answer. Whoever is funding the project is going to give you that much time and money. That is what actually determines how long the project will take, and how much it will cost.

* It fits well with an agile process where you want to release early, release often, and get user feedback. You sort your roadmap so the most important stuff comes first, and you do releases until you run out of money or win.

But... I need the schedule to make decisions

Joel Spolsky has a good scheduling methodology, and he makes a strong pitch that you will make better business decisions if you know how long things will take. Furthermore, Capers Jones, after studying thousands of software projects for his Software Assessments and Best Practices tome, observes that "Excellence in project management is associated with 100% of successful [large] projects." In other words, you are doomed without it. Scheduling is listed as one of the most important project management competencies.

Let's look at these two arguments separately. Spolsky observes that you need to know how long things will take, so that you can decide which thing to do. If thing A is going to take twice as long to do as thing B, then you will probably commit to thing B. I agree that in this case you should make some sort of crude estimate for A and B. Then we can compare relative effort - is one twice as much as the other - and make a decision. Spolsky's Evidence Based Scheduling will give you a more accurate estimate, assuming that the relative error in every estimate is similar. So, you do a bunch of work, but you don't change the relative estimates at all.  In other words, you can estimate small projects, but it doesn't help you.
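The argument can be made concrete with a toy Monte Carlo in the spirit of Evidence Based Scheduling (not Spolsky's actual implementation; the estimates and velocity history here are invented). Each trial divides a raw estimate by a randomly sampled historical velocity; because both tasks are scaled by the same history, the A-versus-B comparison ends up where it started:

```python
import random

random.seed(0)

# Invented numbers: raw estimates for two candidate features (in days),
# and one developer's historical velocities (estimate / actual).
estimate_a, estimate_b = 10.0, 5.0
velocities = [0.8, 0.9, 1.1, 0.6, 1.0]

def predict_median(estimate, trials=10_000):
    """EBS-style simulation: divide the estimate by a sampled velocity
    to get a predicted actual duration; return the median prediction."""
    samples = sorted(estimate / random.choice(velocities) for _ in range(trials))
    return samples[len(samples) // 2]

median_a = predict_median(estimate_a)
median_b = predict_median(estimate_b)

# The predictions are more realistic than the raw estimates, but since both
# were scaled by the same velocity history, A is still roughly twice B.
print(median_a, median_b, median_a / median_b)
```

The absolute numbers move; the relative ranking, which is what the A-versus-B decision needs, does not.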

The weakness in arguing from the statistics in the Software Assessments and Best Practices book comes from the book itself.  In the large project category (10,000+  function points), projects are more likely to fail than to succeed. In other words, if you put yourself in a position where you need accurate schedules, you will probably fail.

The best defense for large projects is to divide them into smaller pieces, thereby removing them from the large project danger zone.  In fact, this is exactly the recommendation of agile development, which divides projects into short iterations.  You would use Priority Based Non-Scheduling to decide what goes in the next iteration.  You can use Evidence Based Scheduling inside the iteration, but it won't change your plan much.

The Spolsky methodology doesn't help you in the case of these big projects.  It depends on mapping current team members to tasks that those team members will do.  So, it works if your team is stable for the life of the project.  This assumption will usually hold true for projects up to 6 months long.  It also assumes that you know what you want to build, and can do detailed task breakdowns for the entire project.  So, you can apply this methodology to medium-sized projects with well-defined output (basically, version 2+ of medium-sized projects), or to smaller, defined, agile iterations.

For big projects that will take more than 6 months to do - in Capers Jones' description, projects with more than a few thousand function points - you can get an estimate by counting the number of function points.  There is a pretty consistent relationship between the number of function points, the time required to complete a project, and the risk of overruns.  This type of estimate has the advantage that it is not sensitive to changes in the team.  It's also relatively insensitive to innovation - changes in direction.  You may end up doing completely different tasks, but you end up in a similar range of complexity.
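As a rough sketch of that relationship, here is the rule of thumb often attributed to Capers Jones: schedule in calendar months is approximately the function point count raised to a power near 0.4. The exponent is an approximation, and the numbers below are order-of-magnitude guidance, not predictions:

```python
# Rule of thumb often attributed to Capers Jones (approximate):
# schedule_months ~= function_points ** 0.4.  The 10,000 FP threshold
# for the large-project danger zone comes from the discussion above.

def rough_schedule_months(function_points, exponent=0.4):
    return function_points ** exponent

for fp in (100, 1_000, 10_000):
    months = rough_schedule_months(fp)
    zone = "large-project danger zone" if fp >= 10_000 else "manageable"
    print(f"{fp:>6} FP -> ~{months:.0f} months ({zone})")
```

Note that the input is the size of the output, not a task list, which is why this kind of estimate survives team turnover and changes in direction.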

So, we have a hierarchy of estimating situations:
  1. You need to get a small thing done and released.  The estimate makes no difference because you will implement things in order of importance regardless of the estimate.

  2. You have a series of small projects and you need to determine their relative difficulty.  A crude estimate will be adequate, but it will get more accurate if you do some task breakdowns, maybe even Evidence Based Scheduling.  Then you do Priority Based Non-Scheduling.

  3. You have a mid-sized project with known functionality or workflow and a stable team.  In this case, you use something like Evidence Based Scheduling, and you get some benefit from it.  The scheduling basically tells you how much money you need, and when you need to pull the plug and move to a priority stack.

  4. You have a project that involves some innovation, or a new product, or a startup.  Scheduling isn't going to help you because if you are smart, you aren't sure what tasks you will implement past the next iteration.  However, Priority Based Non-Scheduling will help you a lot.

  5. You have a big project with a timeline that approaches or exceeds the half-life of your team.  You can get a rough budget and estimate by doing function point analysis.  That still leaves you in the position of projects that succeed less than 50% of the time.  Then you have to divide it into iterations and do Priority Based Non-Scheduling to figure out what goes into the iterations.

If you must...

Some projects are irreducibly complex. All of the projected features have to be delivered before you can use the result. Usually, this happens because you are replicating an existing product or process, and people will refuse to switch to the new version until they have everything from the old version.  For example, before you can release Windows Vista, you have to put in everything that Windows XP does.  If you are stuck with one of these cursed projects that must be estimated in its entirety, you can indeed learn something from Joel on scheduling.

Here is the version I use with my guys:  Estimates on tasks that you write down and understand are usually pretty accurate. However, all software projects take longer than their initial estimates, because you forget a lot of tasks. The overrun represents tasks that you neglected to include in your initial estimate.

So it is important to break an estimate down into small tasks. The more tasks you write down, the better your estimate will be. Why? Because as you break each task down into smaller tasks, you think of more tasks to do. I hate Microsoft Project because its scheduling method is ridiculous, but people find it useful for estimating, since it encourages you to expand your outline into lower-level subtasks.
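A toy illustration of the point, with invented task names and hours: the flat estimate misses work that only shows up when you expand the outline one level.

```python
# Top-of-head guess for a feature, in hours.
flat_estimate = {"build import feature": 16}

# The same feature, broken down one level.  The breakdown surfaces tasks
# (malformed-row handling, user docs) that the flat estimate silently omitted.
breakdown = {
    "build import feature": {
        "parse upload": 6,
        "map fields to schema": 5,
        "handle malformed rows": 4,  # forgotten in the flat estimate
        "write user docs": 3,        # also forgotten
        "wire up UI": 6,
    }
}

detailed_total = sum(breakdown["build import feature"].values())
forgotten = detailed_total - sum(flat_estimate.values())
print(detailed_total, forgotten)  # 24 hours total, 8 hours of forgotten work
```

The overrun in the flat estimate is exactly the forgotten tasks, which is why more breakdown means a better estimate.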

Practically, this means that you have to write down all of your deliverables, so the estimate becomes a sort of specification.  At that stage, you can use your developer estimates, or time per function point, and you should get similar results.  However, if you are prototyping and testing, you may not know the specification.

So ask yourself if making an estimate gives you any advantages. It SEEMS useful to have an estimate, but often it does not make any difference in what you actually do. If you know the value and priority of your tasks, and you always work on the highest value task that remains, then the work that you actually do is not affected by the estimates that you have made, right or wrong or absent.


About the author

Andy Singleton: working on Continuous Agile and Accelerating Innovation; Assembla CEO and startup founder
