Wednesday, September 24, 2008

Robarts/Rickmeier - How Deep Should Your Pockets Be?

By Jane Robarts and Mark Rickmeier - 24 September 2008

Teams and organizations usually choose to distribute an Agile project across multiple locations for one of two reasons: cost savings or location of resources. In the latter case, organizations often have no choice but to distribute a project due to the location of delivery team members or subject matter experts, or the availability of office space. Cost savings, however, are another matter entirely. Many organizations are choosing to distribute projects overseas to development shops in Asia. Wooed by the reduced costs overseas development can offer through lower salaries and expenses, organizations often leap into distributed development expecting a margin equal to the difference in salaries of the employees.

Distributed delivery, especially captive offshore or outsourced overseas delivery, has additional costs often not considered. Identifying and quantifying these costs before deciding to distribute can be the difference between a well-budgeted and planned project and one that continuously leaks money and time.

On the surface, it may seem simple to forecast the extra costs associated with distributed delivery: budget a few trips to transition knowledge at the start and end of the project, ensure the teams all have phone lines to communicate, and recognize that people may have to adjust their schedules a bit due to time zone challenges. However, as one digs deeper, it becomes clear that there are additional prices to be paid to successfully distribute a project.

Take, for example, the costs associated with travel to and from the distributed sites. An organization has undertaken a distributed project and is sending a subject matter expert (SME) to the offshore location to transfer knowledge early in the project. Flights to and from the site have been priced and accommodations budgeted. On the surface, all seems planned out. It turns out this employee needs a visa to travel. But first he needs a passport. So the organization digs into its pockets, pays for the passport and visa, and absorbs the cost of the SME having to spend two days queueing at the passport office and consulate. Now, the SME is ready to travel. Almost. There are the immunizations to be taken care of. Digging further into the organization’s pockets, immunizations are paid for and a further half day is taken off to visit the doctor.

Costs continue to mount once the SME realizes that he’ll need a computer while at the offshore site. His company-issued desktop can’t travel with him. Yet again, the pockets open up and a laptop is procured. A day of the SME’s time is spent configuring this computer for his needs.

Finally, our SME starts his voyage offshore. It’s a 36-hour trip overseas to the offshore facility. He’s travelling during the week, and the organization reaches a little further into its pockets: the SME is unavailable to answer questions during this time, and the development team’s productivity slows. Despite the SME arriving at the offshore site a bit jetlagged, the visit is a huge success. Knowledge is transferred to the offshore team, rapport is built, and only once does the organization have to dig into its pockets unexpectedly, when the SME realizes his cell phone won’t work overseas.

These small costs, while individually insignificant, accumulate to become a big expenditure for the organization, not only in terms of cash but also in terms of loss of the SME’s valuable time and the compounding impact this has across the team.

Beyond travel, there are other costs of distributed delivery. All projects plan a certain amount of contingency. Distributed projects need to plan for more. Events at one site can have a ripple effect on the other. A snowstorm at one site may mean half the team is out, creating a bottleneck to the other team. A power outage on the offshore side may mean developers onshore can’t get the latest code. These two independent events can have double the impact, requiring the organization to reach into its pockets to pay for the loss of productivity.

Employee loyalty is expensive. Distributed delivery, especially where it involves extreme time zone changes, can take a toll on an employee. This ‘flat world’ of software delivery still has activity regulated by daylight and normal operating hours. Distributed delivery teams have to work outside of these normal hours to collaborate with their distanced team members. It’s not uncommon for distributed teams to be burdened with early morning and late evening phone calls, emails, or video conferences. People tire. Social lives suffer. And employee loyalty is challenged. Compensating for this with perks such as extra paid leave, in-office meals, and even late-night cab rides home all add to the cost of a distributed project.

Adding up all the factors for this distributed project, the organization’s pockets had better go deeper than simply the cost of resources and travel. The little items add up and often compound, and the embedded costs escalate. Planning for all of these costs prior to embarking on a distributed project will help ensure that the true cost advantages of distributing are realized.

All of this makes distributed Agile delivery sound like an expensive venture not worth pursuing. This isn't necessarily true: cost benefits can definitely be realized. There is a point where the investment becomes worthwhile. It's just important to understand and factor in all those hidden costs when budgeting for distributed delivery. This is particularly true with distributed Agile delivery which expects changing requirements, relies on a solid business understanding by the team, and expects designs to evolve.

At what point does the investment in distributed delivery make sense? There is no set rule, no formula that can be applied. Having the right economy of scale to justify the cost of distributed development is critical to reaping the benefits of lower salaries, 24-hour development cycles, and available resources. The simple rule is: don't be blinded by the lower burn rates offered by offshore development. Be prepared to do the homework and fully understand the hidden costs of distributed development (a rough budgeting sketch, with purely illustrative figures, follows the list):

  • infrastructure and hardware, and more complex (and time-consuming) procurement rules for each
  • travel costs, especially last-minute emergency trips
  • cross pollination: transferring team members from each location to work for long periods with teams in other locations de-risks a project, but isn't done in zero time or for zero cost
  • additional roles: duplication of roles in each location may be necessary to ensure the right individuals are available to answer questions, guide the team, etc.
  • latency in communication: at best, work is delayed because somebody isn't instantly available; at worst, work proceeds and leads to a false positive that work is complete and requires rework to correct what was done
  • additional contingency to reflect the exponential loss of capacity when key people are unavailable or where events in one location impact progress in another
  • long distance calls, communication devices, video recorders, digital cameras, international mobile phone roaming, etc.
  • personal strain of communicating (or not communicating) via lo-fi channels that leads to frustration, hostility and even withdrawal
  • changes in personal schedules to account for timezones
  • learning the rules and procedures - all countries have different visa requirements
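
To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. All rates and cost figures are invented for illustration only; they are not data from any real project.

    # Back-of-the-envelope comparison: naive offshore saving vs. saving after
    # budgeting the hidden items listed above. All figures are hypothetical.

    ONSHORE_RATE = 100_000    # annual fully loaded cost per onshore developer
    OFFSHORE_RATE = 40_000    # annual fully loaded cost per offshore developer
    TEAM_SIZE = 10

    naive_saving = (ONSHORE_RATE - OFFSHORE_RATE) * TEAM_SIZE

    hidden_costs = {
        "travel, visas, immunizations":             60_000,
        "duplicate hardware and infrastructure":    30_000,
        "duplicated roles across locations":        80_000,
        "communication gear and roaming":           10_000,
        "extra contingency for cross-site outages": 50_000,
        "retention perks (meals, cabs, leave)":     25_000,
    }

    real_saving = naive_saving - sum(hidden_costs.values())

    print(f"Naive annual saving: {naive_saving:,}")                # 600,000
    print(f"Hidden costs:        {sum(hidden_costs.values()):,}")  # 255,000
    print(f"Real annual saving:  {real_saving:,}")                 # 345,000

Even with made-up numbers the point stands: the margin is real, but noticeably thinner than the salary differential alone suggests.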

Longer-running Agile projects composed of multiple releases typically see the cost savings to a greater extent than short-term ones. Many of the additional costs for distributed development are one-off or early-stage events in the life of a project. Hardware investments typically occur only once. If there is a stable distributed team, knowledge transfer visits and rotations may decrease over the lifetime of the project. As Agile practices mature for the distributed team and it finds its pace and understands the business, communication needs between the locations may diminish. The longer an Agile team works together on a project, the better it is able to absorb startup costs and the more likely it is to become efficient in its work patterns, yielding the expected benefits of distributing work.

Distributed Agile delivery in most cases is a worthwhile undertaking. The benefits may not be realized as early as you might anticipate, and the costs may be higher, so your pockets may have to be deeper than initially thought. However, careful and considered planning and budgeting early on will help predict if the project is worth the distributed investment.




About Jane Robarts: Jane has over 10 years of experience in custom development of enterprise applications as a developer, analyst, project manager, and coach. She has consulted on and managed projects in India, China, the UK, the US, Australia and Canada. Jane has spent most of the last several years coaching distributed teams and working directly on distributed Agile delivery projects both onshore and offshore. She has also developed and delivered many sessions on Agile delivery to a variety of audiences.

About Mark Rickmeier: Mark has 8 years' experience as a quality assurance tester, business analyst, iteration manager, Scrum master and project manager working on large scale enterprise applications. He has extensive distributed development experience working with project teams in Australia, India, China, Canada, the UK, and the US. His industry background includes equipment leasing, retail, insurance, healthcare and retail banking. He is currently a project manager specializing in Agile transformation and project governance with a focus on metrics and measurement.

Wednesday, September 17, 2008

Pettit - Is Your Project Team "Investment Grade?"

by Ross Pettit - 17 September 2008

One of the most important indicators of risk in debt markets is the grade (or rating) that the rating agencies assign to debt. The agencies have drawn criticism for their ratings of instruments such as collateralized debt obligations. Despite the controversy, the rating agencies remain the authority on assessing credit quality. Their impact on AIG's efforts to raise capital this week indicates how much market influence the rating agencies have.

There are several independent companies that assess the credit quality of bonds. The bond rating gives an indication of the probability of default. Although the bond is what is rated, the rating is really a forecast of the ability of the entity behind the bond – e.g., a corporation or sovereign nation – to meet its debt service obligation.

Each rating firm uses a different and proprietary approach to assess credit quality, involving both quantitative and qualitative factors. For example, bond ratings by Moody’s Investors Service reflect long-term risk consideration, predictability of cash flow, multiple negative scenarios, and interpretation of local accounting practices. In practical terms, this means that things such as macro and micro economic factors, competitive environment, management team, and financial statements are all factors in determining the credit worthiness of a firm.

Rating agencies are subsequently able to characterise the risk of debt investments. An investment grade bond will have lower yield but offer higher safety (that is, lower probability of default). A junk bond will have higher yield but lower safety. Between these extremes are intermediate levels of quality: a bond that is rated AA will have very high credit quality, but lower safety than a AAA bond, while a bond rated at A or BBB, while still investment grade, indicates lower credit quality than a AA bond.

This concept is portable to IT. Just as the entity behind a bond is rated, the team behind an IT asset under development can be rated for its “delivery worthiness.” The difference is that we look to the rating not as an indicator of the risk premium we should demand, but as an indicator of the threat to (and therefore a discount on) the yield we should expect from the investment.

To rate an IT team, we can look at quantitative factors, such as the raw capacity of hours to complete an estimated workload, variance in the work estimates, and so forth. But we also need to look to qualitative factors. Consider the following (a toy scoring sketch follows the list):

  • Are we working on clear, actionable statements of business need? Are requirements independent statements of business functionality that can be acted upon, or are they descriptions of system behaviour laden with dependencies and hand-offs?
  • Are we creating technical debt? Is code quality good, characterised by a high degree of technical hygiene (is code written in a manner that it can be tested?) and an absence of code toxicity (e.g., code duplication and high cyclomatic complexity)?
  • Are we working transparently? Just as local accounting practices may need to be interpreted when rating debt, we must truly understand how project status is reported. Are we managing and measuring delivery of complete business functionality (marking projects to market), or are we measuring and reporting the completion of technical tasks (marking to model), with activities that complete business functionality, such as integration and functional test, deferred until late in the development cycle?
  • Are we delivering frequently, and consistently translating speculative value into real asset value? In the context of rating an IT team, delivered code can be thought of synonymously with cash flow. The more consistent the cash flow, the more likely a firm will be able to service its debt.
  • Are we resilient to staff turnover? Is there a high degree of turnover in the team? Is this a “destination project” for IT staff? Is there a significant amount of situational complexity that makes the project team vulnerable to staff changes?
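
The article is explicit that rating a team is not a formulaic exercise, so the following is only a toy sketch. The factor names, weights, and grade thresholds are assumptions made up for illustration; it shows the structure of combining weighted factor scores into a single "delivery worthiness" grade, nothing more.

    # Toy "delivery worthiness" rating. Factor names, weights, and thresholds
    # are illustrative assumptions, not a published rating methodology.

    FACTOR_WEIGHTS = {
        "actionable_requirements": 0.25,
        "technical_debt_control":  0.25,
        "transparent_reporting":   0.20,
        "frequent_delivery":       0.20,
        "turnover_resilience":     0.10,
    }

    GRADES = [(0.90, "AAA"), (0.80, "AA"), (0.65, "A"), (0.50, "BBB")]

    def rate_team(scores):
        """Weight each 0-to-1 factor score and map the total to a grade."""
        total = sum(weight * scores.get(name, 0.0)
                    for name, weight in FACTOR_WEIGHTS.items())
        for floor, grade in GRADES:
            if total >= floor:
                return grade
        return "junk"   # below investment grade

    # Example: strong engineering, but opaque reporting and high turnover risk.
    print(rate_team({
        "actionable_requirements": 0.9,
        "technical_debt_control":  0.8,
        "transparent_reporting":   0.4,
        "frequent_delivery":       0.7,
        "turnover_resilience":     0.3,
    }))   # -> "A"

In practice the weights would shift with each team's circumstances, which is exactly the point made later about situational awareness.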

At first glance, this may simply look like a risk inventory, but it’s more than that. It’s an assessment of the effectiveness of decisions made to match a team with a set of circumstances to produce an asset.

There are few, if any, absolute rules to achieving a high delivery rating. For example, assigning the top talent to the most important initiative may appear to be an obvious insurance policy for guaranteeing results. But what happens if that top talent is bored to tears because the project isn't a challenge? Such a project – no matter how much assurance is given to each person that they are performing a critical job – may very well increase flight risk. If that risk materialises, the expectation of returns from the project will crater instantly; if it wasn't anticipated, the team can appear to change from investment grade to junk very quickly.

While the rules aren’t absolute, the principles are. An IT team developing an asset expected to yield alpha returns will generally be characterised as a destination opportunity offering competitive compensation, operating transparently with actionable requirements, maintaining capability "liquidity" and a healthy “lifestyle,” and delivering functionally complete assets frequently to reduce operational exposure. All of these are characteristics that separate a team that is investment grade from one that is junk.

While these factors are portable across projects they may not be identically weighted for every team. This doesn’t undermine the value of the rating as much as it means we need to be acutely aware of the circumstances that any team faces. This also means that assessing the delivery worthiness of a team is borne of experience, and not a formulaic or deterministic exercise. While the polar opposites of investment-grade and junk may be clear, it takes a deft hand to recognise the subtle differences between a team that is worthy of a triple-A rating and one that is worthy of a single A, and even why that distinction matters. It also requires a high degree of situational awareness – employment market dynamics, direct inspection of artifacts (review of requirements, code, software), and certification of intermediate deliverables – so that the rating factors are less conjecture and more fact. Finally, it is an exercise to be repeated constantly, as the “market factors” in which a team operates – people, requirements, technology, suppliers and so forth – change constantly. This is consistent with how the rating agencies bring benefit to the market: they are not formulaic, they spend significant effort to interpret data, and they are updated with changing market conditions.

FDIC Chairman Sheila Bair commented recently that we have to look at the people behind the mortgages to really understand the risk of mortgage-backed securities. With IT projects, we have to look at the people and the situations behind the staffing spreadsheets and project plans. IT is a people business. We can measure effectiveness based on asset yield, but we are only going to be as effective as the capability we bring to bear on the unique situation – technological, geographical, economic, and even social-political – that we face. Rating is one means by which we can do that.

Investors in financial instruments have a consistent means by which to assess the degree of risk among different credit instruments. IT has no such mechanism to offer. Just as debt investors want to know the credit worthiness of a firm, so should IT investors know the delivery worthiness of their project teams.

Especially when alpha returns are on the line.




About Ross Pettit: Ross has over 15 years' experience as a developer, project manager, and program manager working on enterprise applications. A former COO, Managing Director, and CTO, he also brings extensive experience managing distributed development operations and global consulting companies. His industry background includes investment and retail banking, insurance, manufacturing, distribution, media, utilities, market research and government. He has most recently consulted to global financial services and media companies on transformation programs, with an emphasis on metrics and measurement. Ross is a frequent speaker and active blogger on topics of IT management, governance and innovation. He is also the editor of alphaITjournal.com.

Wednesday, September 10, 2008

Hevery - Changing Developer Behaviour, Part II

By Miško Hevery - 10 September 2008

In Part I of this series, we took a realistic look at what usually happens when we initiate change. We also took a look at the initial steps of effective change: defining a metric and getting people to accept it as a goal. In this second and final part, we'll introduce two additional steps and also highlight the point at which it becomes clear that change has taken effect.

Step 3: Make Progress Visible

When progress toward achieving a goal is highly visible, it can't be ignored:

  • It keeps the goal fresh in everyone's mind and helps to prevent regressive behaviors.
  • It communicates progress to all stakeholders, especially those who may not understand the details of the work.
  • It provides an opportunity for many small celebrations for a team on the way to achieving its goal.
  • There's a direct relationship between what developers do and changes to the visible "progress meter," providing a source of pride and "bragging rights" for individuals.
  • Regression can be easily identified, right down to the work done (the commit) and the "guilty" (the committer).

We can make progress visible by publishing both the raw metric and a burn down chart which computes the estimated date of completion based on the rate of progress.

This makes it easier to answer the obvious question, "When is it going to be done?" It also provides a fact-based response should there be a need to push back on demands, e.g., "you need to get this done by X date."

The continuous build status page is very well suited to this. With every build, the change metric can be computed and published automatically. The continuous build status page will then show a current chart of progress over time.

Going back to our example, we can use Testability Explorer to compute an overall cost number. Let's say that our project scores a cost of 500, and we make it our goal to lower the cost to 50 or below. As part of our continuous build we can produce a graph of the testability cost over time. We can project this graph in a highly visible location (on the wall, on the ceiling) so that it is on everyone's mind all the time. We can also easily identify people who are improving the situation with the code they commit. By calling out their contribution to the team, we get additional buy-in from the team.
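
A minimal sketch of that projection follows, assuming the continuous build appends each run's overall cost to a small history (the dates and values below are invented). Linear extrapolation of the recent rate of progress is enough to publish an estimated completion date alongside the graph.

    # Project when the metric will reach the goal, based on the average rate
    # of improvement across the recorded history. History values are invented.

    from datetime import date, timedelta

    history = [                    # (build date, overall testability cost)
        (date(2008, 6, 1), 500),
        (date(2008, 7, 1), 460),
        (date(2008, 8, 1), 430),
        (date(2008, 9, 1), 395),
    ]
    GOAL = 50

    (first_day, first_cost), (last_day, last_cost) = history[0], history[-1]
    days_elapsed = (last_day - first_day).days
    rate_per_day = (first_cost - last_cost) / days_elapsed   # cost removed per day

    if rate_per_day <= 0:
        print("No net progress yet; cannot project a completion date.")
    else:
        days_left = (last_cost - GOAL) / rate_per_day
        eta = last_day + timedelta(days=round(days_left))
        print(f"Cost {last_cost}, goal {GOAL}, estimated completion {eta}")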

Before long, there will be an anxious manager running around asking, "why is the graph not falling fast enough?" When that happens, the change process is self-running and will complete itself. By keeping the graph current and visible even after the goal is reached, the change will be durable.

Step 4: Make it Required

By and large, developers care only about checking-in code. They are content to continue bad habits if those habits enable them to check in work sooner. When the priority is to get code checked-in, all those fancy graphs will simply show the team getting further away from its stated goal.

If this happens, no amount of meetings and discussion will make a difference, but one thing will: create a unit test that makes sure that any code change can only move the metric in a positive direction. This way any changes to the source code that take the team further from the goal fail the build. Once the build goes into a "red" or broken state, it needs to be fixed. This means that every developer will have to deal with the test to get the build to pass. Fixing the code to pass the build brings the team closer to its goal. A specific code change made to resolve a broken build makes the "what has to change" very clear to every person on the team.
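
Here is a minimal sketch of such a check, assuming the build exports the current metric to a file and the best value seen so far is versioned alongside the code. The file names and reading logic are illustrative, not part of any particular tool.

    # Ratchet test: the build fails if the metric gets worse, and the recorded
    # threshold tightens whenever it improves. File names are illustrative.

    import unittest
    from pathlib import Path

    THRESHOLD_FILE = Path("metric_threshold.txt")    # best value seen so far
    CURRENT_FILE = Path("build/current_cost.txt")    # written by this build

    class MetricRatchetTest(unittest.TestCase):
        def test_metric_never_gets_worse(self):
            threshold = float(THRESHOLD_FILE.read_text().strip())
            current = float(CURRENT_FILE.read_text().strip())
            self.assertLessEqual(
                current, threshold,
                f"Cost rose from {threshold} to {current}: this change moves "
                "the team away from its goal, so the build goes red.")
            if current < threshold:
                # Lock in the improvement so it cannot quietly regress later.
                THRESHOLD_FILE.write_text(f"{current}\n")

    if __name__ == "__main__":
        unittest.main()

Run as part of the build, this turns "don't make it worse" from a request into a hard rule.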

Expect a lot of uproar, and that each developer will require individual attention to help them re-factor their code so that it passes the unit test. The screaming will stop as soon as each person has received individual attention to learn how to code correctly. This won't move the needle on the chart, but the numbers won't get any worse.

This suggests that early on, it will need to be someone's job to re-factor code. Only then will the chart move in the desired direction. Very likely, this is the only way the team will make any progress initially. This is because developers will know how to write new code, but they won't know how (or won't be motivated) to re-factor the old code to conform to the new standard. Additionally, they will not yet have experienced any benefits from this new way of doing things. This typically takes a few months. While this is taking root, you will need to be patient and make slow but steady progress refactoring the existing code until developers catch up.

The Tipping Point

At some point, developers themselves will start refactoring old code and the rate of progress will accelerate. The interesting thing is that even if you - the agent of change - leave the project at this point, the goal will eventually be reached. The visible charts clearly show where the team is and how much remains to achieve the goal, and this becomes ingrained into the everyday life of both developers and management. People will see the estimated date of completion and start counting the days to it. Each person will also keep the team focused on the goal. Because the estimated time of completion is based on past progress, the projected date of completion will slip if the rate of refactoring slows down. When this happens, people will ask why the date slipped. This brings attention back to the effort and reignites it. This feedback loop will exist even without a full-time change agent.

Every change has its opponents. At some point the most vociferous opponents become the most vocal champions, because they experience the benefits first-hand. At the same time, all the "passengers" on a project team find they have been doing something for so long that they can't imagine working any other way. The team achieves its goal either through strong desire or because it has developed new "muscle memory."

Behaviour Changes when Goals are Visible and Reinforced

To successfully make change, it isn't enough to get consensus to try something new. You need a means by which to quantify a goal, objectively measure progress, and constantly remind people of how well they're doing toward achieving that goal. If these things aren't done, developers will find reasons and excuses not to change. This creates a downward spiral: by carrying on with business as usual, skills stagnate and technical debt accumulates. The irony is that precisely when a team most needs to make change, its own behaviors work against it. A high degree of visibility, and immediate, constant feedback, can overcome even the most difficult team situations.




About Miško Hevery: As an Agile Coach, Miško is responsible for teaching his co-workers to maintain the highest level of automated testing culture, allowing frequent releases of applications with high quality. He is very involved in the Open Source community and is the author of several open source projects. Recently his interest in Test Driven Development led him to create Testability Explorer, which he hopes will change the testing culture of the open source community.

Wednesday, September 3, 2008

Breidenbach - David and Goliath

by Kevin E. Breidenbach - 3 September 2008

What does the story of David and Goliath have to do with a technology journal? That will become obvious in short order, so bear with me.

I work in capital markets technology, specifically trading. Many years ago, a team I was on moved from a “waterfall” methodology to Agile. Waterfall is in quotes because, like any team that claims to follow waterfall, they really had no methodology. People will try to pass off a requirements document and a few design documents as “waterfall” despite the fact that the finished software (if there is finished software) rarely looks like the documents that were used to define it.

Anyway, without getting into the history of why the team chose Agile, suffice it to say it was a success. So much so that it led to the adoption of Agile company-wide, with this team as the reference implementation for how it should be done.

Large organizations find it hard to change the way they behave. They employ a lot of people, and making sure those people are all moving in the same direction is very difficult. In other words, large organizations are not very agile. Case in point: consider that despite demonstrable benefits, a successful and high-profile implementation, and a mass roll-out effort, Agile has still not made it to every development team after all these years. That is decidedly un-agile.

Which leads us to David and Goliath. You would have thought that by now big business would have learned at least something from that story. It’s not just a story from the Christian Bible, it’s also in the Hebrew Bible; so it’s not really clear why it’s taking business so long to get it. The military did. They started using small agile teams with light but technically advanced weaponry to great success. So if they get it, why can’t big business?

Now, not every big business needs to have agile software development teams. Many firms' computer systems do not need to change for many years, so waterfall works well for them. But financial trading changes daily. Whether it’s new regulation, new products, new algorithms, the trading systems are constantly in need of update.

So while it’s good that trading teams are starting to become agile, there’s more to the problem than just using Agile as a development methodology: the organization needs to think agile. Development teams only produce software, they don’t put it in production. It needs to be deployed onto the infrastructure and administered. The larger the organization, the larger the infrastructure that needs to be maintained. Making changes to infrastructure often becomes an obstacle for development teams to get things done. There are a number of reasons for this.

Infrastructure teams are rarely exposed to Agile as a methodology. This is probably due to the fact that it has, until now, been directed mainly towards development teams. Infrastructure is often subject to rigid methodologies like CMM. Mixing CMM (or any document driven methodology) with Agile reminds me of the movie Die Hard 3, where Bruce Willis and Samuel L. Jackson watched a green and red liquid mix together just before the concoction blew half a New York block away. It isn’t going to work.

The Agile development team has consistent, time-bound iterations of, say, one or two weeks, while the infrastructure team has service level agreements that invariably are longer than one iteration length. This puts a spanner in the works. It means that the development team has to start thinking long in advance. And what if new requirements come along that change the infrastructure requirements of the application? This could mean the infrastructure team will need to go back to the requirements phase of their process, resetting the SLA clock (don’t think this doesn’t happen!) and delaying development.

This isn’t all bad for the development team, though. Because Agile is so transparent, the business will have been involved all along and will see how infrastructure impacts delivery. The result could be (and I pray it is) that the business may actually convince the infrastructure teams that they aren’t really deities, obliging them to start acting like the service they were intended to be. This doesn’t diminish the value of good infrastructure people. It simply means that infrastructure teams need to realize that some businesses, and thereby development teams, need to operate in a different fashion to others and that they should be able to accommodate this without impacting timelines or quality!

The infrastructure team is also afraid of another word: complexity. The more complex the infrastructure, the more difficult and expensive it is to maintain, so fear of the “c” word is understandable. Unfortunately, senior management also becomes afraid of complexity and produces draconian rules that restrict development teams in their work. Ironically, the pursuit of controlling complexity can render your organization incapable of being agile.

For example, it’s fair to say that it’s good policy to keep the number of enterprise-wide tools to a minimum: why support both Microsoft Word and Corel WordPerfect? The same can also be said for operating systems (why have Linux, AIX, Solaris, HP-UX and Windows?) and even server hardware (one manufacturer).

But – and it's a big but – once the task of reducing complexity starts, it takes on a life of its own and infiltrates every area of the technology organization. There are edicts that developers shall only use one type of IDE, or that certain open-source libraries or software languages are no longer allowed because other libraries or languages do something almost similar and, with some workarounds, can do what you need.

This actually increases the complexity of the software being produced and reduces the agility of the development team. This is because the developers now have to build those workarounds – they have to fit a square peg into a round hole in order to force some functionality out – when all they really needed was to use a different development language or a different library.

Organizations justify this over-zealous complexity policy by saying that it adds cost to support these new libraries, when the fact is that no other support is needed as the libraries are embedded in an archive, or the language produces a simple executable, or the developer supports their own IDEs and tools. Indeed, the cost of the development teams building the workarounds often far exceeds any costs of supporting the tools that they intended to use. Not only that, but the workarounds may actually impact application performance costing the business money during trading!

Which brings us back to the title. David was smaller and more agile than the large and bulky Goliath, who relied on his size and strength. Not only that, but David realized, even if subconsciously, that standardizing on equipment doesn’t work when faced with an enemy like Goliath. Had he fought with just a spear or sword against Goliath, he may not have emerged victorious, and the scriptures would be short of a good story! By picking up some new tools in the form of rocks and a slingshot, he outsmarted the clear favorite and started the trend of people cheering for the underdog. So increasing the complexity actually had a positive effect on the task at hand!

The military’s small Special Forces teams recognize this too. They have different equipment to the regular large force, their own logistics and even command structure. It works.

Another group that has recognized this: the companies that compete against large financial trading firms. Despite the economic climate, small to medium trading firms have been hiring furiously over the last 18 months, while the large banks have had hiring freeze after hiring freeze and layoffs on top of layoffs.

Bigger is better? I’ll let you decide. All I know is that I’d rather be in David’s shoes than Goliath’s!




About Kevin Breidenbach: Kevin has a BSc in computer science and over 15 years of development experience. He has worked primarily in finance but has taken a few brief expeditions into .com and product development. Having professionally worked with assembly languages, C++, Java and .Net, he's now concentrating on dynamic languages such as Ruby and functional languages like Erlang and F#. His agile experience began about 4 years ago. Since that time, he has developed a serious allergic reaction to waterfall and CMM.