Tuesday, October 13, 2009

The feature -> task conversion

The inspiration for this post came from a question to my previous post Kanban vs. JIRA, regarding the problem of how to use JIRA to model Kanban's pull-based approach to task handling. I will try to describe my view on how to address this problem, which isn't so much an issue with JIRA as a fundamental task management challenge.

The root of the problem lies in the conversion of the customer's/product owner's input requirements/features into an operational set of tasks which can be efficiently processed by the team, and which leads to a system with the functionality needed by the customer. This problem manifests itself very differently in Scrum and Kanban oriented projects.

In a Scrum project the activity of processing the features contained in the backlog into processable tasks isn't really addressed. There is a lot of focus on defining and prioritizing features, which is a very important activity, but this doesn't in itself lead to an effective task pipeline (push oriented) or task pool (pull oriented). This isn't necessarily a problem, as this conversion activity can simply be added to the development process. If ignored, on the other hand, it can turn into a root cause of many problems, such as broken sprints, inability to finish tasks satisfactorily, inferior quality, etc. This is exactly the case for most of the Scrum projects (and development projects in general) I have encountered: the tasks used to manage the sprints don't efficiently reflect all the software development aspects needed to deliver a finished system.

In Kanban the input to the process is a backlog of directly processable tasks, and Kanban therefore requires some kind of mechanism for 'feeding' this task pool. The major benefit of Kanban's approach to task processing is that the requirement of only working with well defined, operational tasks becomes glaringly obvious, and a failure to achieve this will quickly break the Kanban process because of the team's inability to finish tasks. The problem of ensuring that the operational tasks used in the daily work reflect the full set of work needed to complete the required system remains, and it must be addressed through activities outside of the Kanban process.

A very good tool in helping with the development and management of the task backlog/pool is defining a number of attributes and task relations which can be used to qualify tasks before they are added to the 'processable' task pool. These task attributes can also be used by the team 'pulling' the tasks from the backlog to determine which tasks would be the best to work on at a given point in time. The problem is that the task model can become quite complex as the number of attributes and tasks grows, and a good issue tracker, like JIRA, is therefore very useful in managing the model, ensuring consistency and allowing focused views on the aspects of the task model needed in a given situation.

Note that I'm not talking about a simple project management tool here, as the tool needs to include all aspects of the feature -> task lifecycle. Because of this, I will refer to an issue pool, which is the full set of features, bugs, actions, tasks, etc. The task pool, or backlog, is the 'processable' task view we can use when accessing the issue model from a task management role. This is relevant when we work in a Scrum, Kanban or other task oriented context.

Here is my (very verbose) list of issue attributes I find helpful in nurturing a good issue model, together with the relevant JIRA field used for storing the information:
  • Summary: This is the 'human readable' key to the issue. A precise and relatively distinct summary is essential in being able to generate efficient issue and task overviews, as found in the Scrum and Kanban oriented card views, or different tool filters. The summary should be updated as the scope and content of an issue becomes clearer or changes.
  • Detailed description (Description): The full description of the issue. Will probably start out empty and grow as the content of the issue becomes clearer. The ability to include rich content in the issue description is important to avoid the need to distribute the issue information over more than one tool.
  • Importance (Priority): Here I prefer a very coarse-grained scale, like the 5 levels found in JIRA. This is partly to allow the team performing the work some freedom in choosing which task to work on next (pull task), partly because other issue attributes will contribute other dimensions to the issue model.
  • Iteration (Fix version): All projects have some kind of time partitioning, like versions, increments, sprints, releases, deployments, etc. I prefer to use these iteration containers (together with the importance attribute) for modeling issue priority or severity, instead of the very detailed relative prioritization between all the backlog tasks described in Scrum. I think that customers (product owners) find this more coarse-grained priority system more natural, both because the detailed priority between 2 different issues isn't necessarily a concern to the customer, and because the iteration containers directly reflect the very tangible deployment and release milestones, which are the real concerns of the customer. It should be possible to place issues in multiple versions with different granularity, e.g. sprint/minor version/major version.
  • Estimated (remaining) time to implement (Original and remaining estimate): Always a nice thing to have on an issue when considering when to start work.
  • Issue relations (Links): A major concern not addressed in a one-dimensional backlog, as found in Scrum, is the dependency of tasks on one another. The definition and maintenance of this aspect of the issue model is a critical input to the team choosing which task to work on. Without a good issue dependency model, it will be difficult to finish tasks, and critical paths will not be addressed.
  • Software domain (Component): An indication of which part of the system under development the task belongs to should be available (if applicable), e.g. database, GUI, business layer, test, analysis, etc. This can be used to pull tasks according to the competences of the available team resources.
  • Business domain (Component): Issues should be qualified according to which business domain aspects they reflect (if applicable). This can be viewed as a high level categorization of user stories/features/use cases.
  • References (Hyperlinks): As issues have relations to other parts of a project's information, e.g. requirements, tests, stakeholders, etc., these references should also be added to the issue. A very generic way of achieving this is by hyperlinks in the description.
  • A 2-level breakdown of the issues (Issue-sub-task): This reflects the dual nature of issues. The first (and top level) part is the initial feature oriented aspect of the issue, which is also the part interesting to the project stakeholders, like the sponsor, product owner, users, executives, etc. The second part is the task breakdown of the issue, which is the operational tasks the team can work on. Other project management systems operate with a more complicated 3 or 4 level hierarchical task model, but here I again prefer the multi-dimensional model, which is much better at modeling the many aspects of the dynamics in a project.
If the issues are broken down and enriched with all the aspects listed above in an efficient issue tracker, a good foundation for an operational task pool should be available. Of course the road to a correct breakdown and growing of tasks isn't trivial, and reflects the team's increasing insight into the nature of the project. But as I mentioned in the beginning, an efficient and rich model is crucial in the development of an efficient task management system.
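To make the attribute list above a bit more concrete, here is a minimal sketch of such an issue model and the kind of qualification check that guards the 'processable' task pool. The field names are illustrative only, not JIRA's actual API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Issue:
    summary: str                                             # 'human readable' key
    description: str = ""                                    # grows as the issue matures
    priority: str = "Major"                                  # coarse-grained, e.g. JIRA's 5 levels
    fix_versions: List[str] = field(default_factory=list)    # iteration containers
    remaining_estimate: Optional[float] = None               # hours
    components: List[str] = field(default_factory=list)      # software/business domain
    blocked_by: List["Issue"] = field(default_factory=list)  # issue links (dependencies)
    subtasks: List["Issue"] = field(default_factory=list)    # the 2-level breakdown
    resolved: bool = False

def is_processable(issue: Issue) -> bool:
    """A task only enters the pool once it is qualified: summarized,
    estimated, scheduled into an iteration, and not blocked by
    unresolved issues."""
    return (bool(issue.summary)
            and issue.remaining_estimate is not None
            and bool(issue.fix_versions)
            and all(dep.resolved for dep in issue.blocked_by))
```

A vague feature straight from the customer fails this check and stays in the issue pool until it has been broken down and enriched; only then does it surface in the task backlog.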

So how do we pull tasks from this multi-dimensional issue model? Well, a very pragmatic way of doing this is to define a number of views/filters, which can be used as prioritized mini task pools from which to pull tasks. An example of such a filter setup could be:
  1. First, all Critical issues should be handled.
  2. Secondly, all analysis tasks from the next sprint.
  3. All analysis tasks from the next version.
  4. All test specifications/acceptance criteria for the next sprint.
  5. All tasks contained in this sprint.
  6. All major issues.
  7. .........
This could be flavored with prioritizing implementing full features, selecting tasks based on the competences of the free resources, task dependencies, etc.
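The filter cascade above can be sketched in a few lines. The filter names and issue fields here are made up for illustration; in JIRA the same effect would be achieved with saved filters:

```python
def pull_next_task(issues, filters):
    """Walk the prioritized filters top-down; the first filter that
    matches anything supplies the mini pool to pull from. Which task
    within that pool is taken is left to the team member (here simply
    the first match, for brevity)."""
    for name, predicate in filters:
        pool = [issue for issue in issues if predicate(issue)]
        if pool:
            return name, pool[0]
    return None, None

# Hypothetical filter setup mirroring the numbered list above
filters = [
    ("critical",             lambda i: i["priority"] == "Critical"),
    ("next sprint analysis", lambda i: i["sprint"] == "next"
                                       and i["component"] == "analysis"),
    ("this sprint",          lambda i: i["sprint"] == "this"),
    ("major",                lambda i: i["priority"] == "Major"),
]
```

The 'flavoring' mentioned above (competences, dependencies, full features first) amounts to either adding clauses to the predicates or replacing `pool[0]` with a smarter choice within the matched pool.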

So in conclusion, there is no simple way to produce an efficient task queue in which there is a trivial answer to which task to work on next. Gaining insight into the multi-dimensional task model is on the other hand a crucial activity, which can lead to the success or failure of a project, depending on the project's ability to implement a working feature -> task mechanism. Here the rich modeling and filtering capabilities of an efficient issue tracker are critical, especially for more complex projects. Still, this is secondary to the availability of a competent team, who can do the analysis and management work needed for the actual feature -> task processing.

Monday, August 3, 2009

Kanban vs. JIRA

Some time ago I wrote a post, SCRUM vs. JIRA, where I tried to reflect on the differences between the SCRUM and the JIRA project model. There I argued that SCRUM was based on a number of somewhat fragile preconditions, which made it very difficult to complete sprints successfully. The JIRA model, on the other hand, is a more 'fundamental', methodology neutral breakdown of a project's dynamic aspects, and therefore a much more robust basis for a project, onto which higher order project management frameworks like SCRUM, XP, Unified Process, etc. can be added.

Now let's take a look at Kanban, which has recently appeared as a software development framework to address some of the challenges Scrum is facing. Many of the Scrum related concerns mentioned in the motivation for introducing Kanban correspond to the list of fragile preconditions in the SCRUM vs. JIRA post. Just as I tried to argue in my case for using JIRA as the project model foundation, the Kanban approach to task handling is concerned with focusing on the fundamentals of the development process, on top of which more ambitious process frameworks might be constructed (see e.g. Scrum-ban). Let's try to run through the list of Kanban focus areas, and compare these to the task aspects found in JIRA.
  • Task processing: In Kanban the central concern is the efficient processing of tasks, that is, the pipeline Open -> In progress -> Resolved. JIRA users will recognize this process as the default JIRA workflow, and this is exactly what JIRA basically is: an application for registering and listing issues/tasks as they move through this lifecycle. A lot of higher order concerns may of course also be modelled in JIRA, but this is optional and can grow together with the project methodology as the development process matures. In fact, in many cases JIRA is introduced just for this purpose: someone in a project/organisation feels a need for a simple tool for registering and listing the things that need to be handled, in a more robust and shareable manner than by using post-its, simple todo tools, mails, etc.
  • Task exposure: Just as in Scrum, the main visible artifact is the whiteboard (which is the original meaning of Kanban, by the way) containing the tasks which are currently being processed. This functions as the task model 'altar' we gather around to synchronize our views of the project and maintain the model together. This corresponds to JIRA's collaborative approach to task management, where the project issues are accessed through a user-friendly website where all project team members can view and contribute to the model. Note: this is in opposition to many of the more conventional task management tools like MS Project, Excel, etc., where the task management is owned and maintained by a Project Manager, with the occasional presentation for the rest of the team.
  • Task pulling: One of the main differences between Scrum and Kanban is that Kanban focuses on the team choosing which tasks should be processed next (pull), compared to Scrum's emphasis on the product owner defining (push) which tasks should be handled first (backlog prioritizing) and handled in the near future (sprint planning). Note, these aren't necessarily in contradiction with each other; the task pull and push mechanisms can coexist on two different levels in the project, task pulling being a daily activity, where task pushing is done on a longer term basis. The Kanban pull approach is again seen in JIRA's collaborative approach to updating the status of tasks, where project members 'pull' issues from the pool of open tasks. Any partitioning or prioritization of the open issue pool is a higher order concern, and isn't necessary for maintaining a working task model.
  • Minimizing 'In progress' tasks: This isn't really addressed in JIRA, even though JIRA is a very efficient tool for monitoring it; the 'In progress' list is directly accessible from the project portal page. The problem of building an efficient way of resolving tasks in a consistent and predictable manner is one of the core software development challenges which Kanban centers around. The mechanisms for handling issue resolution must be found in other tools and human work processes, but they are crucial for achieving any kind of project success, and certainly for any hope of extending the development process with more advanced project management methodologies (like Scrum).
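The pipeline and work-in-progress concerns above boil down to very little machinery. This is a sketch only; JIRA's real workflow engine is configurable and far richer than this:

```python
# Default JIRA-style lifecycle that Kanban centers on
TRANSITIONS = {
    "Open":        {"In Progress"},
    "In Progress": {"Resolved", "Open"},  # a blocked task can be rotated back out
    "Resolved":    set(),
}

def move(status, target):
    """Advance a task through the pipeline, rejecting illegal jumps."""
    if target not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition: {status} -> {target}")
    return target

def wip_respected(statuses, limit):
    """The Kanban constraint: never more than `limit` tasks in progress."""
    return sum(s == "In Progress" for s in statuses) <= limit
```

The point of the comparison is that JIRA gives you the transition table and the monitoring for free, while enforcing the `wip_respected` constraint remains a human work process.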
Now let's take a look at the problematic preconditions I listed in the SCRUM vs. JIRA post and see how Kanban performs compared to Scrum.
  • Stable team: The impact of an unstable team on a Kanban project is much smaller than on a Scrum project, because a Kanban team doesn't need to concern itself with sprints failing. Task processing may become slower, but this is not necessarily a problem in Kanban; the decreased velocity can be addressed on a continual basis by adjusting the amount of work in progress. The ability to process tasks may still suffer from key competences disappearing from a Kanban project, which might cause difficulty in finishing any tasks in a satisfactory manner, so the Kanban flow might break in this case.
  • Stable sprint backlog: The stability of the backlog isn't a Kanban concern, because no predictions are made on when chunks of functionality (sprints) are finished.
  • No external dependencies: Kanban tasks may of course also depend on external issues, but in Kanban it is much easier to mitigate any problems arising from this because of the lack of sprints, e.g. a task can be rotated out of the 'In progress' pipeline when an 'external block' appears, without much interruption to the task processing itself.
  • Clear goal: Not relevant, a clear goal is not a Kanban concern as this is a sprint/iteration concept.
  • Small, well estimated tasks: Essential for Kanban, but this is ok, as this is a fundamental precondition for building any kind of predictability into a project.
  • Visual results: Not a Kanban concern, even though it might be a good idea to use this development aspect to assess the task resolution.
As it can be seen, the project requirements needed to get Kanban working efficiently are much more basic than for Scrum, and much closer to the core set of issue management concerns handled in JIRA.

The two constraints which still appear in the Kanban list are that a minimum set of team competences is required, and that the ability to break work down into small, correctly estimated tasks is needed. This is pretty much the most basic capability you can add to the JIRA issue model and still get added value in terms of project management. This means that a well-functioning Kanban process is a much more achievable goal than aiming for a Scrum based work process, and it will make a robust platform for introducing more advanced methodologies, like Scrum. If, on the other hand, you haven't got the basic issue management aspects addressed in Kanban under control, it will be impossible to get the planning and functionality oriented aspects of Scrum to work, and it will be hard to focus on the real problems, because the symptoms in a Scrum based setup will be much more diverse, hiding the real problems causing the process to break down.

As you might have noticed, I really like Kanban's more basic approach to what lies at the core of good task management (and therefore project management), which introduces a better cause-effect mechanism into software development. This hopefully results in a much improved ability to focus on the project's root causes, compared to more complicated process frameworks.

In a JIRA vs. Kanban context, I also find the issue model found in JIRA much more recognizable in Kanban. This means a good understanding of Kanban and its relation to Scrum could function as a nice bridge to implementing a smooth 'JIRA -> Kanban -> Scrum -> CMMI focus process' transition as the development process matures (inspired by the process complexity scale found on page 8 of Kanban vs. Scrum).

Tuesday, June 16, 2009

The cost and benefits of documentation

One of my first posts on this blog was a ramble about which media to use when producing and consuming documentation. Recently I have been lucky enough to define which documentation tools should be used on my projects. The choice has of course fallen on a wiki based information system, more specifically Confluence, backed by JIRA for the more dynamic project information, like bugs, tasks, risks and changes (I provide consulting services for both products ;-).

But the success in this area has made a new challenge visible, which until now has been hiding in the normal swamp of inefficient and unstructured documentation tooling.

One of the first impulses you might get after gaining access to an easier-to-use information system could be to start documenting everything, but this ignores one of the major shortfalls of documentation, namely that producing and maintaining documentation has a cost. Even after the introduction of more efficient tools for accessing and maintaining documentation, production and retrieval in a documentation system is far more cumbersome than simple human-to-human information exchange.

On the other hand, documentation also has its benefits, but I find that the costs and benefits of what documentation to produce when are seldom handled in more than an intuitive manner. This results in very simple documentation models. Examples are:
  1. More documentation is better: Heavy process methodologies like Waterfall models and typical CMMI implementations are primarily built on the assumption that the mechanism for producing better software is driving the implementation by documentation (Documentation Driven Development, DDD). The idea is that more documentation -> more order and structure. You could say that the focus here is the benefits of documentation.
  2. All documentation is bad: This was seen in the initial XP and agile movements, which were a counter-reaction to the Waterfall model and other heavyweight methodologies (DDD). The burden of documentation is the main focus here.
So, in order to get a more balanced approach to when to retain a piece of information in the form of documentation and when to stick to a direct person-to-person strategy, here is my list of forces to consider when choosing whether to create a piece of documentation or not:

Benefits of documentation
  • Doesn't change over time: This is one of the primary drivers for creating documentation.
  • Person independent: You do not need access to a particular person with the right knowledge to retrieve the information; you can just look it up.
  • Scalable: In a pure person-to-person approach to information sharing, you'll quickly find a small subset of project members using a great deal of their time explaining project aspects to other project members. This can be alleviated somewhat by first attempting to look information up in the project documentation, before turning to the project oracle in the particular area of interest.
  • Geographically invariant: If a project team or its stakeholders aren't all placed at the same location, the barrier to person-to-person information exchange rises significantly, thereby making documentation based information exchange more attractive. This is of course only the case if the documentation is accessible at all relevant sites, i.e. preferably Internet based, or at least intranet based.
  • 24/7 accessibility: Documentation based information can be accessed all the time. So even when project members have different working hours, vacations, etc., they will still (in principle) have access to information generated during their absence.
  • Reference information: Where person-to-person information exchange usually varies according to the context it is used in, documentation based information never changes unless somebody actively updates the documentation. This makes documentation a more stable reference platform than person based information, which has a tendency to vary depending on who is delivering the information when. The variation in the information consumer's interpretation of the information provided is of course another matter.
Costs of documentation
  • Needs work: Compared to human memory based information, the production requires a significant amount of work. This isn't just a resource issue, but may also remove the focus from the real problem being solved, turning the effort into documentation production instead. An example is the development of an application design. The real problem being addressed here is developing an efficient application design and applying it to the application implementation. But with sufficiently complex design guidelines and inefficient tools, this can turn into a struggle to fill out document templates and place the produced documents in the right configuration managed structure, the result often being outdated design documentation, never really driving, nor reflecting, the application development.
  • Outdated: One of the mentioned benefits of documentation based information was the stability of the documentation over time. This may be a good thing in the case of a static project, i.e. a project where the project information doesn't change over time, but projects are always changing, so the static quality of documentation degrades proportionally to the dynamics of the contained information. The degradation is either caused by the documentation becoming outdated, or by the effort needed to maintain the documentation.
  • Information retrieval: Several of the benefits mentioned above address the availability of documentation, i.e. you can always find the documentation. The problem is that this apparent quality doesn't translate into the ability to find the information needed. The information retrieval capabilities of documentation are far inferior to a human's ability to interpret and answer questions. This means that the apparent quality of universal documentation retrieval will be degraded severely by the inability to quickly find the relevant information.
  • Reference interpretation: As mentioned under the documentation benefits, the static nature of document based references doesn't ensure consistent interpretation of the reference information. My claim is that it is actually often possible for a competent human reference responsible to provide a much easier to understand explanation of how to apply the reference information to the situation at hand. E.g. the design says the application should look like this, but how does this affect this bit of code I'm working on?
Need driven documentation

All in all, there aren't any clear answers on what to document, and it is therefore very difficult to define a standard documentation structure (even though a lot of 'standard' process frameworks attempt to do this). Instead you should cultivate a more agile/intelligent approach in your organization as to what documentation should be generated.

One thing you have to keep in mind is that the (manual) production of a document doesn't contribute anything to adding knowledge to a project; its sole purpose is to retain information according to the benefits mentioned earlier.

One way of generating just the right amount and type of documentation is to try to avoid writing documentation before you need it. This doesn't necessarily mean activity = document artifact, but that the documentation shouldn't be generated until the distribution and stability benefits of a documentation based information artifact become apparent. A very efficient mechanism for achieving this is to consider, every time a person asks you a question regarding the project, whether that piece of information should be found in the stored documentation. If the information should be found in the documentation, refer the person to the documentation. This can have several outcomes:
  • The relevant documentation is found; everything is great.
  • The relevant documentation is found, but is obsolete. Here you would update the documentation most of the time. If you choose not to update the documentation, you should consider removing it, to avoid wasting other people's time reading invalid documentation.
  • The relevant documentation exists, but isn't found. Here you should consider improving the way of finding this bit of documentation, e.g. add links to the information, improve searchability, or switch to more powerful tools.
  • The relevant documentation doesn't exist. Consider whether the documentation should be produced at this time. Remember, one possible answer could be that it isn't worth the effort: I'll just answer the question.
If you stick to a documentation generation approach like this, you will probably only produce a minimum of documentation, but still, over time, generate the documentation needed to run the project. This is of course only possible in a very iterative project, where the need for different kinds of documentation occurs on a repetitive basis, instead of the usage of the documentation being deferred until right at the end.

Examples of need driven documentation

An example of this is test specifications; these should be used continuously throughout the project, not just in a big bang acceptance test at the (apparent) final delivery of the project. Therefore test specification documentation should be produced right before or during the implementation of the functionality addressed in the tests. Using tests in this manner is the foundation of Test Driven Development, where you could say the TDD concept is actually a synergy between implementation and test, because the test specification should also be driven by implementation activities.

Another example is the production of architecture and design documentation. In the 'good old days' these were considered essential to a proper application implementation. But with the advent of the more pragmatic agile approach to software development, an awareness of a more application development driven architecture and design emerged. The result is that the value of architecture and design documentation is greatly reduced, and the absence of such documentation is no longer considered a sure sign of a chaotic application.

Other documentation drivers
Of course you can't just ignore the more conventional drivers for generating documentation. We all start with an idea of what documentation could be useful in a project, and the organization the development project is part of usually also has some input on what should be produced. But my point here is that these inputs should all be seen in the context of the costs and benefits of documentation, and the decisions on what documentation to produce and maintain should be based primarily on intelligent, need driven, ad-hoc decisions, as opposed to poorly understood upfront QA or organizational standards requirements/advice.

Friday, January 9, 2009

Efficient and fun software development the open source way

One of my constant sources of inspiration in how to work with software development is the way successful Open Source (OS) projects are structured. One obvious reason for using OS projects as inspiration is of course the visibility of how the development works. But other fundamental forces are at work here which make OS projects very interesting when considering how to build an efficient and robust platform for software development.

An interesting aspect of OS projects is that they have to succeed under conditions which we would normally consider crippling in commercial software development. Some of these conditions are:
  • Allocated resources: One of the cornerstones of every project management model is the ability to plan (and somewhat control) which resources are available when. In OS projects people work when they have the time and interest.
  • Control by management: In normal commercial projects you have the luxury of having roles which are dedicated to controlling the project. These are people like project managers, architects, QA, etc., and they are backed by the company or organization. In OS projects there are no formal mechanisms for forcing people to do the 'right' thing.
  • Localized team: Most commercial teams are placed in one location, and distributed teams are usually shunned, because these teams are notoriously prone to inefficiency. OS projects are usually distributed, both geographically and in working hours.
  • Sales organization: In commercial organizations, the usage of the products developed is helped on the way by a sales organization persuading customers of the virtues of the products. In OS software the sales organization is usually lacking.
Because of the absence of the listed factors in OS projects, a number of more fundamental qualities are more clearly visible and have to be addressed.
  • Fun: Because of the voluntary nature of participating in OS projects, an OS project needs to be 'fun' to work in, or else it will die a silent death because nobody contributes. Many of the following qualities are derived from this.
  • Usage driven documentation: Where the documentation in commercial projects is usually produced based on what non-users think is a good idea, OS documentation is much more driven by a Just-In-Time need for documentation (here non-users are people like QA, project leaders, architects, process definers, etc., who don't directly use the developed product or source). The result is that OS documentation is usually much more relevant and up to date than commercial documentation. OS documentation production is also part of the daily work, so the infamous 'task' of documentation is much less pronounced in OS (i.e. boring work isn't done in OS projects).
  • Usage driven by product and web quality: Where a major part of a commercial product's success depends on the sales organization, OS projects have to depend more on the merits of the products themselves and the quality of the (typically web based) public interfaces to the consumers of the products. The result is that the usability and visibility of the project are essential, i.e. documentation, bug/feature tracking and design/source need to be of high quality.
  • Managerless: OS teams very seldom include non-software producing members, and are very efficient role-wise, i.e. everybody is producing actual software.
  • High level of automation: Because boring tasks aren't performed in OS projects, these are either automated if they are necessary, or left out if the need for the task is difficult to see. Mature, high quality OS projects have therefore automated many of the repetitive, boring tasks done manually in many commercial projects.
  • Self-organizing: Because project teams in OS projects aren't defined by outside forces, OS teams usually have a much more organic approach to who-does-what. Everybody is in principle project manager, architect, developer, tester, etc.
  • Distributed team: OS projects are by nature spread all over the globe, and project members typically work at different times. To handle this, OS projects need to function efficiently without people ever meeting or talking together.
For an OS project to be a success, these factors have to be handled efficiently.

Conclusion: Because of the more 'fundamental' nature of OS software development, a good OS development 'model' can be used as a solid foundation for a commercial development process, onto which higher order methods for improving software development like SCRUM, Unified Process, CMMI, internal processes, etc. can be added. If, on the other hand, the concerns exposed in OS projects aren't handled in projects focused on higher order development models, like SCRUM, UP, etc., it will be very difficult to make the project a success. And even worse, the forces ruining the project won't be understood, making improvement impossible (my post SCRUM vs. JIRA and the following discussion elaborate a bit on this).

Disclaimer: The differences described between commercial and OS projects are of course exaggerated; many commercial projects are adopting more agile approaches to software development, which have many similarities to OS development. OS projects are on the other hand seeing greater influence from commercially based team setups, where the advantages of more dedicated contributors become available.