
Wednesday, May 13, 2009

Pettit - Governing IT Restructure

By Ross Pettit - 13 May 2009

In a previous article on change management, I outlined the urgent need to restructure IT. Corporate revenues have plummeted and demand will take a long time to recover. Capital structures have buckled and capital markets are on life support. Business operations are being scrutinized for inefficiency and businesses are deflating their cost structures through salary reductions, unpaid leave and RIFs. With this as a backdrop, it comes as no surprise that 91% of CEOs say they need to restructure the way their organizations work. IT, being core to business operations, faces restructure as well.

The restructuring we need to do in IT has less to do with org charts and budgets that direct effort, and more to do with behaviours that get results. Testing must be a team responsibility, automated and integrated into everybody’s daily activity. Our business partners must be continuously involved in solution development, to enable continuous change management and adaptive project management. IT Governance must be non-burdensome and non-invasive, tracking and connecting a variety of indicators. We have the experiences, examples, patterns, tools and technologies to do all of these things today, but bringing this about requires very basic change in how people get things done.

So how do we restructure? We follow the usual change management formula – figure out where we’re at, where we want to be, and how we’ll get there – but we pay less attention to the tasks of restructuring, and more to the results we achieve.

Step 1: Define a Vision for Operations

The first thing to do is define a vision of operations. That vision needs to be explicitly spelled out. For example, part of our organizational vision might include an expectation such as: "solutions are subjected continuously to quality gatekeepers that validate technical, functional and non-functional completeness." Simple, assertive statements like this clearly communicate expectations for how work is done.

But more than just the “what” of the vision, we also need to make clear “why” we want to do these things. That means communicating the business impact we expect by doing these things. It also means we draw a line between “what we do” and “what it accomplishes.” For example, suppose we’re trying to get solutions into production faster. We can make it clear that we expect we’ll do that by reducing rework, having fewer non-value-added tasks, and having higher bandwidth communication among people in teams. We expect our operational vision will bring each of these about. For example, continuously executing quality gatekeepers will catch defects sooner and reduce rework, and also eliminate non-value-added tasks such as manual solution packaging and industrial-scale manual testing.
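To make the gatekeeper idea concrete, here is a minimal sketch of what such a gate might look like as a script run on every commit by a build server. The specific checks, commands and thresholds are illustrative assumptions, not a prescribed toolchain:

```python
# A minimal quality gatekeeper sketch, run on every commit by a CI
# server. Checks, commands and thresholds are illustrative assumptions.
import subprocess
import sys

def gate(name, command):
    """Run one gatekeeper check and report pass/fail."""
    passed = subprocess.run(command, shell=True).returncode == 0
    print(f"{'PASS' if passed else 'FAIL'}: {name}")
    return passed

checks = [
    ("unit tests",        "python -m pytest tests/"),           # technical completeness
    ("acceptance tests",  "python -m pytest acceptance/"),      # functional completeness
    ("performance tests", "python perf/check_p95_latency.py"),  # non-functional completeness
]

# Run every check (no short-circuiting), then fail the build if any failed.
results = [gate(name, cmd) for name, cmd in checks]
if not all(results):
    sys.exit(1)  # the solution has not passed the quality gate
```

The point of a script like this is that the gate runs with every commit, so "technical, functional and non-functional completeness" is validated continuously rather than at the end of a phase.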

In expressing why we want to do these things, we must be specific: for example, as a result of this restructure, we expect we will be able to deliver new solutions to production every 20 business days, or we expect to reduce IT spend on solution delivery by 25%. The purpose of any restructure isn’t a new structure, but results, so we must use the language of results.

Step 2: Assess Current State

With a very specific vision in hand, we now assess the current state of how work is done, so we know the gap between where we’re at and where we want to be. This requires critically assessing both how work is done and how it is not done. This generally involves artifact review, but more importantly it involves facilitated group discussion among people in different roles - PMs, business people, developers, QA analysts and so forth - to identify practices and patterns.

This assessment is the single most important activity in a restructuring initiative. If we get this piece wrong, nothing else – not the vision, not the plan, not the monitoring – will matter, because all of our actions will be based on false data.

One of the most common mistakes organizations make in restructuring is to try to self-assess where they're at today. Self-assessments are inherently inflationary, because any type of assessment has performance ramifications. They make people uncomfortable, and can be outright confrontational. Self-assessing is usually hypersensitive to the politics of a situation, which creates a tremendous risk of distorting objectivity.

This means we should look to bring in somebody from the outside to perform the assessment. Having no personal investment to protect or political baggage to lug around will give them an independent perspective. In addition to looking critically at how things are done today, they can ascertain the viability of the vision, and recognize potential (and potentially undesirable) outcomes that may result from this restructure. But this takes the right facilitator: it calls for somebody experienced in business, management and technology who understands IT and can quickly understand a business context. The right person will not only waste little time ramping up, they’ll bring relevant experience into the process.

Another risk during assessment is denial that there are any real shortcomings in the way work is done. To overcome a false sense of operational excellence, start the assessment process by talking to IT's customers: perform a customer satisfaction survey or Net Promoter score of IT solution users. An unvarnished external opinion makes it a lot harder to gloss over operational shortcomings, and gets people focused on results (“how do we improve customer satisfaction?”) instead of blame (“you’re making it impossible for me to do my job.”)

Step 3: Define a Restructuring Plan

With current and future state now defined, we next determine the plan that will get us there. A behaviourally-centric reorganization is going to require a change in “muscle memory.” Change doesn't happen overnight, so we need to figure out the stages of organizational evolution that will allow new patterns of work to take root and become durable under stress. While we can look to patterns of organizational change to help us sequence the activities of our change initiative, the plan will be largely derived from experience. There will be some “low hanging fruit” as well as some significant challenges in the plan. We can get on immediately with the obvious stuff, but before we get too far down a path of execution, we need to come face to face with our competencies and deficiencies and call in outside help. For example, if we get blindsided by late-stage project failures, we shouldn't expect that our PMs can self-source a change to new project management practices. Recognizing the areas where we need expertise and engaging it at the right time of our restructuring initiative will get us through the change process with minimum blowback.

Step 4: Monitor the Restructuring

Finally, while we’re in the process of restructuring, we need a way to monitor that we are making meaningful progress. Ultimately, we need to be cognizant of our results: are we closer to our goal of achieving faster time-to-market, or reduced operating costs? But those take time to materialize, and we need to scrutinize the underlying factors of our success or failure. Ongoing customer satisfaction or Net Promoter scores that we initiated during the assessment phase can help ascertain whether we're on the right track, but again, this isn't an elemental enough data point. Restructuring milestones such as “roles defined and staffed” and “reporting structures created” are insufficient, because they're task-based and not results-based. What we're looking for are ways to monitor behavioural change.

To do this, it helps to have a model, such as the Agile Maturity Model. Having been applied at a number of IT organizations in a variety of industries, the AMM allows us to consistently frame current and target organizational behaviours, map actions that will change those behaviours, and monitor how well we’re taking to those behaviours over time.

By using a framework that allows us to consistently and frequently assess how work is performed, we get a pretty good idea as to whether we're increasingly doing things that will engender success. It also allows us to scrutinize where we're deficient, communicate the impact that deficiency is having on our goals, and take corrective action.

You can get started with some of the AMM concepts by running an online profiling tool for your own organization or project. This will give you a picture of how you are structurally aligned today and some insight into where you might have opportunities to change.

While we don’t yet know the regulatory, capital, competitive and commercial fallout from the financial collapse and economic recession, we do know that "business as usual" is off the table. While this makes day-to-day execution challenging, it presents us with an opportunity to recast and remake IT. By focusing on results as opposed to effort, we make IT a transparent, efficient, responsive and collaborative contributor to the business. This makes IT less a supplier of technical services, and more an engaged business partner, putting it firmly at the forefront of corporate leadership. At a time when companies are navigating uncertain waters, better to be sharing responsibility at the helm than relegated to the engine room below deck.





Tuesday, February 24, 2009

Pettit - Restructuring IT: Making In-Flight Change

By Ross Pettit - 24 February 2009

All businesses are undergoing unprecedented changes. Revenue forecasts aren’t materializing, capital structures are proving unsustainable, and operations are being scrutinized for inefficiencies. This, in turn, means that businesses are being completely restructured in how they are capitalized, organized, managed and governed. As businesses restructure, so will IT.

As we face restructure, we have to look critically at our IT lifestyle. We know that if we eat a poor diet and don’t exercise, we run a greater risk of health problems than if we eat a healthy diet and exercise regularly. The same applies to IT: if our work habits lack discipline, we’re going to have health problems that, in turn, put business operations at risk equivalent to heart disease or diabetes.

Agile and Lean offer IT a healthy lifestyle choice: disciplined execution of a set of best practices gives greater focus, consistency and transparency to operations than we get from traditional approaches to IT. Experience in a variety of business domains tells us that we can expect significant impact by taking on Agile practices: we will reduce the probability of a catastrophic failure, defects will decline substantially, delivery times will accelerate, and the business value of what we deliver will increase.

But restructuring to be an Agile / Lean IT organization is not something that happens by management fiat. Agile IT requires that each person embrace fundamental principles that value the business problem at hand as opposed to the technical problems we concoct. This requires behavioral changes that run counter to decades of training, messaging and mentoring embedded in the IT profession. In fact, it goes to the heart of the oft-cited gap between business and IT: day-to-day IT activity is often fundamentally misguided, as people are busy solving the wrong problems. Indeed, it is not uncommon for people in IT to subordinate a very real business need to the pursuit of speculative technological “future-proofing.”

Given the urgent need to restructure, this behavioural gap presents IT leaders with a significant challenge.

First, restructuring takes a lot of effort. If restructuring IT requires changes to fundamental work habits, we must not underestimate the amount of effort that will be needed to bring this change about. Bridging the gap between “how we work today” and “the results we must achieve given the reality we face” is not simply an exercise in tools and training. It requires a concerted effort to change the behaviours that underlie how work is done: how requirements are defined, solutions are developed, teams are organized and managed and IT is governed. This comes about through experience. Change happens within each team and department as people gain proficiency with new work habits while delivering, supporting, and maintaining solutions.

Second, IT can't shut-down while it restructures; it must be restructured while delivery work is in full flight. That means the restructuring effort itself will be subjected to change and adaptation. This makes restructuring a moving target, which both blurs the vision of the target state and wears down people’s patience and energy for the change.

Significant organizational effort spent in pursuit of a moving target will put the change leader in a constant state of conflict. On the one hand, he or she risks managing to a compromise, where long-term sustainability is sacrificed for short-term "results" (often, ironically, in the name of pragmatism). For example, a project team may elect not to introduce unit testing because the effort is believed to be too great and the business need too urgent. The team may write code faster initially, but it will prove to be a false efficiency as defects rise and time spent refactoring obliterates any gains. On the other hand, the change leader must not be dogmatic. An IT organization doesn’t exist so that it can be Lean, it exists so it can deliver business results. Too great an emphasis on process – insisting on 100% unit test coverage, for example, just for the sake of having high unit test coverage – risks prioritizing process over results.

Most of the challenges the change leader will face during in-flight restructuring come down to a simple, if not always obvious, test: are we reconciling the reorganization to business demands, or reconciling it to old work habits? The first part of the test helps us be sure that dogma doesn’t trump results. The second reminds us that we must not compromise in the name of making everybody happy.

To make this decision consistently, even when the goalposts are moving, change leaders must have a clear understanding of both the business need and the goals of the restructuring. By keeping business outcomes – such as reduction of defects or accelerated time to delivery – the clear priority, we bring unambiguous focus to the change effort. And that focus is critical. Every day, change will be challenged by all kinds of things: distracting and counter-productive technical objectives (e.g., “we need to solve every possible problem we may have in this and any future application that requires session management”), irrelevant, non-value-added IT practices (e.g., "we've always required effort-remaining project status reports"), and individual motivations that run counter to change (such as job preservation or people's "stationary inertia" at work). In-flight operations restructuring requires intense, unrelenting effort to swim against the tide of “how things have always been done here.” The change leader must have a clear vision for operations that is flexible to business demand but uncompromising to IT convenience if there is to be a real, durable restructuring.

The change leader must always be clear that the goal of restructuring is to have the organization executing in such a way that it sustainably yields better performance. To be sustainable, we don't just need solutions to report improved technical measures, we need to work in such a way that day-to-day project decisions do no harm, much like decisions we take in our personal lifestyles. For example, we can send code to the "technical clinic" for IT liposuction, where we invest time into remediating tight coupling, dependencies and complexity so that a team has a clean code base, free of technical debt. However, just as the stomach-stapled person may resume bad dietary habits and regain weight, so will the IT team with the remediated code base resume bad habits. This means that IT leaders must insist on a constant, independent validation that restructuring has taken root. One way to do this is to regularly scorecard team performance to scrutinize execution. Another is to make sure that organizational structures – incentives and rewards, recruiting and promotion, governance and oversight – reinforce the Agile value system. If these things are done, old habits are unlikely to return.

The pressure has never been greater on IT. Business is asking, “what are you doing for me this quarter” with increasing anticipation and scrutiny. The best guarantee that IT operations can adequately and professionally answer this question is to execute in a transparent, consistent and disciplined manner. In an uncertain business world, that’s the best form of “futureproofing” IT can pursue.





Wednesday, January 21, 2009

Pettit - Come the Hour, Come the Leaders

By Ross Pettit - 21 January 2009

It’s pretty obvious by now that we’re not in an ordinary downturn. The US Federal Reserve and the Bank of England have invested well over $1 trillion propping up their respective financial services sectors, and every indication is that there is more yet to be spent on asset purchases and guarantees. For all intents and purposes, the domestic US auto industry has failed (with, as of this writing, the notable exception of Ford Motor). The Baltic Dry index has fallen 96% from its peak and there are indications that global container shipping is effectively free. Deflationary forces – with which few of us have any first hand experience – are evident in everything including volumes, prices, salaries, staffing levels and asset values. Add to the landscape completely unexpected large-scale incidents of fraud such as Satyam and Madoff and there is no denying that we live in very challenging times, with few precedents or patterns.

The headlines today are dominated by bailouts. Before long, they’ll be about bankruptcies. This will create a new business landscape, the shape of which none of us can fully predict. One thing we do know is that before the dust settles, businesses both healthy and unhealthy will have to go about the task of restructuring.

Restructuring will extend to IT. IT can keep pace and perhaps even get slightly ahead of the trend. But that requires leadership, and leaders are in short supply in all areas of business. This is obvious from the predominance of two common attitudes: complacency and elective ignorance.

First is complacency, often bordering on denial. There are quite a few people taking a “wait and see” attitude, believing that the economic situation isn’t as bad as is being reported (lots of business is still being transacted), governments are stepping in (weren’t we taught in history class there could never be another “great depression?”), there’s been a change in US leadership, and so forth. The prevailing attitude in this camp is that the economy will work itself out and that businesses are not at wholesale risk.

There are also a fair few willing to ignore what's happening, electing to focus exclusively on execution. Since none of us control the economy, the thinking goes, better that we just get on with the things we do control, such as day-to-day execution. We need to step up our efforts and fight the good fight. Times will be tough, but putting our nose to the grindstone will see us through. Cut costs. Sacrifice. Be positive and optimistic.

In the current economic context, both of these attitudes are acts of capitulation. They substitute “hope” and “aggressiveness” for genuine leadership.

As leaders of IT organizations, we’re several steps removed from the line-of-business of our partners, making it difficult for us to be true business leaders. It also places us in a vulnerable situation: if IT is perceived as a utility (and not a strategic partner) we’re seen as a draw on revenues rather than a core competency in long-term success. The CEO and the board won’t look to us for strategic contribution as much as they’ll look for a budget request consistent with the deflationary times.

This is a frustrating environment in which to try to lead, as traditional market and operational moves are acts of blind faith or wild ambition. What can we do today to provide leadership for our business partners and our teams?

  • Have a firm but malleable business vision founded in facts.
  • Pick a timeframe that can be monitored and adjusted.
  • Restructure operations for responsiveness.

Here are six practical things we can do to execute a leadership agenda.

  1. Vision: We must forecast a bottom for our business / industry. Relax: any forecast we make is going to be wrong. The point isn’t to be precisely accurate, but to define a floor against which we can reconcile and explain all of the business decisions we will face in the coming months. That floor must be something we arrive at independently, by analyzing the data (macro and micro) at our disposal. It is important that we process every bit of economic data and filter signal from noise, fact from emotion. We cannot selectively pick facts nor emotionally align with an outcome that we would like to be true. If we preface any replay of our analysis with “I believe..." we have failed this test.
  2. Timeframe: We must choose the indicators that we’ll use to monitor and adjust the timing (and the details!) of our forecast. Not only is it important that we forecast a bottom, we must get our fingernails dirty with the data. This will allow us to ascertain whether reality is evolving toward or away from our predicted future state, and give a reasonable time horizon to adjust our execution. To do this, we need data. Headliners such as the S&P 500 and FTSE 100 are interesting, but can bake in stale data such as last quarter’s sales or dividend forecast. Look to leading indicators of economic activity. For example, if we’re heavily exposed to consumer spending, we can look to the Baltic Dry index to see when global trade starts to show sustainable signs of life. Or if we're close to construction or financial services, we can look to see when the Case-Shiller index shows that house values are again consistent with rates of income growth as they were prior to 2003. Better yet, we can create our own composite index out of key indicators specific to our situation, such as the enterprise value or the percentage of toxic-to-tangible assets of our customers (a sketch of such an index follows this list).
  3. Vision: We must be specific and direct with everybody – from the board, to the CEO and CFO, to our business peers, to everybody in our teams – about what the data is telling us. Does it appear that our business is shaping up to be a predator, a buyout target, or in a different business entirely? Figure out the business needs that IT must satisfy to support the company that will emerge on the other side, and make sure we are in full agreement with our business partners. We must articulate a decoupled vision so that we can explain and focus efforts on the right problem at the right time. We can encourage people to take the initiative in acquiring new skills in alignment with that vision. The data may be negative and it may not make people very happy, but how we act and prepare, and how we explain our actions and preparations, inspires confidence more than all the rah-rah optimism in the world will ever achieve.
  4. Restructuring: We must change the way we work to maximise responsiveness. We cannot predict when recovery will happen or what form that recovery will take. Cutting costs to withstand a downturn in the hope that recovery is around the corner (and with it a return to business as usual) is not leadership. Being sustainably responsive to whatever the economy, the market, governments and the competition deals us is leadership. We can act very boldly to eliminate situational complexity, unaligned gatekeepers, and any other obstacles that make it difficult to get things done. We can also look very closely at Lean principles to not only eliminate waste but to make sure effort is directed toward results.
  5. Restructuring: We must take a long, hard look at the portfolio for any self-targeted missiles. The one thing that will undermine our leadership in the eyes of our business partners is if we are blindsided by something that is in our control. Especially in difficult times, people will do things to contain bad news in the fear of losing their jobs, so we must be vigilant: do we have any projects that could surprise us with a spectacular collapse? Is Bernie Madoff one of our project managers (or worse, our project portfolio manager?) We need to find out now. Right now. We can do this by bringing unrestricted transparency to our projects.
  6. Restructuring: We must identify the top 3 capabilities that will be the most valuable in our future and patiently pursue them. Perhaps our organization has a deficiency in project management, or we envision changing from a custom appdev shop into an integration shop. We can take some long but lightweight positions in these different capabilities. For one thing, we can look internally and identify the people in whom we want to invest for skill development. We can look externally as well: labor supply outweighs demand, so this is an opportune time to advertise for new hires. We can also partner for capability: plenty of firms are coming to grips with the same set of challenges, so there's no need to go it alone. Above all else, we must take our time and make sure we get the right people under the right circumstances. And we mustn't assume that we'll recognize the right people or the right circumstance, especially if we have no experience or a poor track record of acquiring it. We must invest in developing interviewing techniques, and be aggressive but fair in defining terms and opportunities for partners.
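As a sketch of the composite index suggested in point 2: one simple approach is to normalise each indicator against its own history and combine the results with weights that reflect our exposure. The indicator series, latest readings and weights below are invented purely for illustration:

```python
# Hypothetical composite index: z-score each indicator against its
# own history, then take an exposure-weighted average. The series
# and weights are invented for illustration only.
import statistics

history = {  # trailing observations of each indicator, oldest first
    "baltic_dry": [774, 791, 820, 885, 960],
    "case_shiller_yoy": [-18.2, -18.0, -17.6, -17.1, -16.4],
    "customer_toxic_asset_pct": [22.0, 21.5, 21.8, 20.9, 20.1],
}
# Negative weight where a falling indicator is good news for us.
weights = {"baltic_dry": 0.5, "case_shiller_yoy": 0.3, "customer_toxic_asset_pct": -0.2}

def zscore(series):
    mu, sd = statistics.mean(series), statistics.stdev(series)
    return (series[-1] - mu) / sd  # how unusual is the latest reading?

composite = sum(weights[name] * zscore(series) for name, series in history.items())
print(f"composite index: {composite:+.2f}  (rising => reality evolving toward our forecast)")
```

The arithmetic is trivial; the leadership work is in choosing indicators that genuinely lead our business and revisiting the weights as our exposure changes.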

This isn't easy to execute. A lot of people in IT lack business fundamentals and may struggle to understand our goals and objectives. We will make mistakes and suffer setbacks. And it can be difficult to explain to people engaged in daily firefighting why this demands attention. But going off in a fit of blind execution is economic "trench warfare," a tactic that has a history of unpleasant consequences. Survival may be at the front of our minds, but as leaders we must be able to articulate a future that is more than just survival. Many IT organizations "survived" the downturn of 2001-3 but never fully "recovered," underperforming for their business partners in the years that followed. We can't ignore threats to survival, but we must restructure and reorganize with informed preparedness, leading toward some vision - even if a bit inconclusive on the details just yet - of a transformed destination. To realize that vision, we're going to need every oar in the water, so it is best that we treat our people with respect and give them full disclosure of our vision and expectation, and the opportunity to take the initiative in sharing in its fulfillment.

At a time when a lot of businesses are on fire, this sounds like a lot of ethereal work. And there is no denying that we’re in for a lot of long days and long nights, sacrifices and hard decisions just to stay on top of operations. But this isn’t the time to be tactical. Playing the hand we're dealt and hoping for the best, or simply executing pell-mell in the expectation that something good will come of it, abdicates leadership at the time it is needed the most. Businesses don’t need administrators doubling down on the same techniques, they need leaders ready to invent and innovate in a different and as yet unknown commercial landscape. Making the effort to shape our situation relative to an informed, forward-looking business context will give us that much more of an opportunity to determine our futures.





Wednesday, November 19, 2008

Pettit - States of IT Governance

By Ross Pettit - 19 November 2008

As we head into a period of economic uncertainty, one thing we can count on is that IT effectiveness will be called into question. This isn’t so much because IT has grown excessively large in recent years, but because of results: industry surveys still show that as many as 60% of IT projects fail outright or disappoint their sponsors. As we head into a downturn, executives may look to IT to create business efficiency and innovation, but they won’t do that until they have scrutinised spend, practices and controls of IT.

This makes the need for good IT governance more urgent.

Governance is a curious concept. Governance doesn’t create value, it reduces the likelihood of self-inflicted wounds. It’s taken for granted, and there isn’t a consistent means to show that it actually exists. It is conspicuous only when it is absent, as it is easier to identify lapses in governance than disasters averted. And we tend not to think of an organisation as having “great” governance, but of having “good” or “bad” governance.

This strongly suggests that “good” is the best that governance gets. It also suggests, as the nursery rhyme goes, that when it is bad, it is horrid.

Any critical examination of our governance practices will focus on what we do poorly (or not at all) more than on what we do well. But rather than cataloging shortcomings, it is better to start by characterising how we’re governing. This will give us an opportunity to not only assess our current state, but forecast the likely outcome of our governance initiatives.

To characterise how we govern, we need some definition of the different types or states of governance. To do that, we can categorize governance into one state of “good” and multiple states of "bad."

We'll start with the bad. The most benign case of “bad” governance is unaligned behaviour. There may be guiding principles, but they're not fully imbued in day-to-day decisions. Individual actions aren't necessarily malicious as much as they are uninformed, although they may be intentionally uninformed. Consider an engineer in a Formula 1 team who makes a change to the car, but fails to take the steps necessary to certify that the change is within regulation. This omission may be negligent at best, or a "don't ask / don't tell" mentality at worst. This first category of “bad governance” is a breakdown of participation.

The next category is naivety. Consider the board of directors of a bank staffed by people with no banking background. They enjoyed outsized returns for many years but failed to challenge the underlying nature of those returns.1 By not adequately questioning – in fact, by not knowing the questions that need to be asked in the first place – the bank has unknowingly acquired tremendous exposure. This lapse in rigor ultimately leads to a destruction of value when markets turn. We see the same phenomenon in IT: hardware, software and services are subjected to a battery of well-intended but often lightweight and misguided selection criteria. Do we know that we're sourcing highly-capable professionals and not just bodies at keyboards? How will we know that the solution delivered will not be laden with high-cost-to-maintain technical debt? Naïve governance is a failure of leadership.

Worse than being naïve is placing complete faith in models. We have all kinds of elaborate models in business for everything from financial instruments to IT project plans. We also have extensive rule-based regulation that attempts to define and mandate behaviour. As a result, there is a tendency to place too much confidence in models. Consider the Airbus A380. No doubt the construction plan appeared to be very thorough when Airbus committed $12b to the program. During construction, a team in Germany and another team in France each completed sections of the aircraft. Unfortunately, while those sections of the aircraft were "done", the electrical systems couldn’t be connected. This created a rather large, expensive and completely unanticipated system integration project to rewire the aircraft in the middle of the A380 program.2 We see the same phenomenon in IT. We have detailed project plans that are surrogates for on-the-spot leadership, and we organise people in work silos. While initial project status reports are encouraging, system integration or quality problems seemingly appear out of nowhere late in development. Faith in models is an abrogation of leadership, as we look to models instead of competent leaders to guide behaviour toward results.

Finally, there is wanton neglect, or the absence of governance. It is not uncommon for organisations to make optimistic assumptions and follow through with little (if any) validation of performance. Especially at the high end of IT, managers may assume that because they pay top dollar, they must have the best talent, and therefore don’t need oversight. People will soon recognise the lack of accountability, and work devolves into a free-for-all. In the worst case, we end up with a corporate version of the United Nations’ oil-for-food program: there's lots of money going around, but only marginal results to show for it. Where there is wanton neglect of governance, there is a complete absence of leadership.

This brings us to a definition of good governance. The key characteristics in question are, of course, trust and competent leadership. Effective governance is a function of leadership that is engaged and competent to perform its duties, and trustworthy participation that reconciles actions with organisational expectation. Supporting this, governance must also be transparent: compliance can only be built-in when facts are visible, verifiable, easily collected and readily accessible to everybody. This means that even in a highly regulated environment, reaction can be swift because decisions can be effectively distributed. In IT this is especially important, because an IT professional – be it a developer, business analyst, QA analyst or project manager – constantly makes decisions, hundreds of times over the life of a project. Distributed responsibility enables rapid response, and it poses less of a compliance risk when there is a foundation of trust, competent leadership, and transparency.

This happy state isn’t a magical fantasy-land. This is achievable today by adhering to best practices, integrating metrics with continuous integration, using an Agile-oriented application lifecycle management process that enables localised decision-making, and applying a balanced scorecard. Good IT governance is in the realm of the possible, and there are examples of it today. It simply needs vision, discipline, and the will to execute.

In the coming months, we are likely to see new national and international regulatory agencies created. This, it is hoped, will provide greater stability and predictability of markets. But urgency for better governance doesn't guarantee that there will be effective governance, and regulation offers no solution if it is poorly implemented. The launch of new regulatory bodies - and the actions of the people who take on new regulatory roles - will offer IT a window into effective and ineffective characteristics of governance. By paying close attention to this, IT can get its house in order so that it can better withstand the fury of the coming economic storm. It will also allow IT leaders to emerge as business leaders who deliver operating efficiency, scalability and innovation at a time when it's needed the most.

1 See Hahn, Peter. “Blame the Bank Boards.” The Wall Street Journal, 26 November 2007.

2 See Michaels, Daniel. “Airbus, Amid Turmoil, Revives Troubled Plane.” The Wall Street Journal, 15 October 2007.





Wednesday, October 22, 2008

Pettit - Volatility and Risk of IT Projects

by Ross Pettit - 22 October 2008

When we plan IT projects, we forecast what will be done, when we expect it will be done, and how much it will cost. While we may project a range of dates and costs, rarely do we critically assess a project’s volatility.

Volatility is a measure of the distribution of returns. The greater the volatility, the greater the spread of returns, and the lower the risk-adjusted value of the investment.

In finance, volatility is expressed in terms of price. In IT projects, we can express volatility in terms of time, specifically the length of time that a project may take. Time is a good proxy for price: the later an IT asset is delivered, the later business benefit will be realised, the lower the yield of the investment.

Using Monte Carlo simulation we can create a probability distribution of when we should expect a project to be completed. DeMarco and Lister’s Riskology tool calculates a probability curve of future delivery dates. It allows us to assess the impact of a number of factors such as staff turnover, scope change, and productivity on how long it will take to deliver. In addition to projecting the probability of completion dates, the Riskology tool will also calculate the probability of project cancellation. The inclusion of fat tail losses makes this a reasonably thorough analysis.

Monte Carlo simulation gives us both an average number of days to complete a project and a variance. Using these, we can calculate the coefficient of variation to get an indicator of project variability.

CV = standard deviation (days) / average days to complete the project

The coefficient of variation is a unitless measure of relative variability: the higher the CV, the greater the volatility, and the greater the risk there is with the investment.
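For those who want to experiment with the idea, here is a minimal sketch of this kind of simulation in Python. The distributions, probabilities and parameters are illustrative assumptions, not a reconstruction of the Riskology tool:

```python
# A minimal Monte Carlo sketch of project-duration risk. Simulate a
# project many times under a few risk factors, then compute the mean,
# standard deviation and coefficient of variation (CV) of its duration.
# All distributions and probabilities below are illustrative assumptions.
import random
import statistics

def simulate_once(planned_days=120):
    # Each planned day takes between 0.9 and 1.6 actual days, most often 1.1.
    duration = sum(random.triangular(0.9, 1.6, 1.1) for _ in range(planned_days))
    if random.random() < 0.30:                  # scope change hits ~30% of runs
        duration *= random.uniform(1.05, 1.25)
    if random.random() < 0.15:                  # staff turnover hits ~15% of runs
        duration += random.uniform(10, 30)
    return duration

runs = [simulate_once() for _ in range(10_000)]
mean, sd = statistics.mean(runs), statistics.stdev(runs)
print(f"average: {mean:.0f} days, std dev: {sd:.0f} days, CV: {sd / mean:.2f}")
```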

Historical project data allows us to put a project's CV in context. Suppose we have metrics on prior projects showing that the projects IT does for Accounting slip, on average, 1 day for every 10 days of plan, with a standard deviation of 0.5 days. Suppose further that a new project expected to last 6 months is forecast to be 6.5 days late, with a standard deviation of 3 days. Somewhat akin to assessing the Beta (although it is not calculated entirely from historical data), this comparison allows the IT project investor to ask piercing questions. What makes us think this project will be different from others we have done? Why do we think the risk we face today is so much different than it has been in the past?
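Worked through with the numbers above, the comparison looks like this (treating 6 months as roughly 126 business days is an assumption):

```python
# Historical baseline for Accounting projects: 1 day of slip per
# 10 days of plan, with a standard deviation of 0.5 days.
historical_cv = 0.5 / 1.0                # CV of slippage = 0.50

# New project: ~126 business days of plan, forecast 6.5 days late,
# with a standard deviation of 3 days.
forecast_cv = 3.0 / 6.5                  # CV of slippage ~= 0.46
slip_history_predicts = 126 / 10 * 1.0   # history predicts ~12.6 days late

# The forecast claims roughly half the slippage that history predicts --
# exactly the gap the investor's piercing questions should probe.
print(historical_cv, forecast_cv, slip_history_predicts)
```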

It is appealing to think that by supplying some indication of the volatility and Beta we’re giving investors in IT projects a complete picture, but there are limits to what this tells us. Our risk assessment is only as good as our model, and our model will be laden with assumptions about people, capabilities, requirements and opportunity. What if we fail to completely identify and properly anticipate both probability and impact of the risks that we are so carefully modeling? Come to think of it, isn’t this what got Wall Street into so much trouble recently?

Risk management is a primitive practice in IT. All too often, it is little more than an issue list with high/medium/low impact ratings. Typically, risks are neither quantified nor stated in terms of impact to returns. Worse still, because they’re simply lists in spreadsheets, they’re often ignored or denied. This means that the IT project manager is focused on internal risks, such as technology challenges or days lost due to illness. They don't pay much attention to external risk factors such as a regionally strong tech economy that threatens to lure away top talent, or a supplier teetering on the edge of receivership. By quantifying the impact of risks – both individually and collectively – the project manager comes face-to-face with the broader environment on a regular basis.

Focusing on volatility also makes it obvious that the “on time and on budget” goal is achieved not so much by what goes right as much as by what doesn’t go wrong. That, in turn, strongly suggests that being "on time and on budget" is a misdirected goal in the first place. It also suggests that traditional project management has less to do with project success than we might think.

Let’s consider why. Traditional IT manages to the expectation that everything can and should go according to plan. Changes in scope, changes in staff, and even mistakes in execution are treated as exceptions. Traditional IT management looks to reconcile all variations to plan and will attempt to do so even when the sum of the variations – 100% staff turnover, 100% requirements change – creates a completely new project. The traditional IT project manager is trying to maximise effort for available budget: “the project will be on time, what might go wrong to make it late, and what must we do to compensate.”

By comparison, Agile project management expects change. It focuses on reducing volatility by making deliveries early and often. The Agile project manager is trying to maximise yield given available time: “the project will be late, what can we do to bring it forward, and what can we do to reprioritise.” This is a subtle - but critical - shift in thinking that moves risks from the fringe to the center of the frame.

Risk analyses are supplements to, not replacements of, good management. Risk models are abstractions. No risk model will capture every last exposure. Assumptions in underlying numbers will be wrong or arrived at inexpertly. Worse still, they can be gamed by participants to tell a particular story. All this can lead to poor investment decisions and ultimately investment failure. Still, while working in blind adherence to risk models is hubris, working without them is reckless. Investment managers (or in IT, project managers) are paid for their professional judgment. Investors (IT project sponsors) are responsible to audit and challenge those entrusted with their capital. What managers and sponsors do pre-emptively to ensure project success is far more complete when risk analyses are brought to bear.

The higher the expected yield of an IT project, the more important it is to be aware of volatility. If volatility isn't quantified, the project lacks transparency, and its investors have no means by which to assess risk. This leads to a misplaced confidence in IT projects because long-term success is assumed. If the recent financial turmoil has taught us anything, it's that we can assume nothing about our investments.

Wednesday, September 17, 2008

Pettit - Is Your Project Team "Investment Grade?"

by Ross Pettit - 17 September 2008

One of the most important indicators of risk in debt markets is the grade – or rating – assigned to debt instruments, from straightforward corporate bonds to collateralized debt obligations. Despite the controversy, the rating agencies remain the authority on assessing credit quality. Their impact on AIG's efforts to raise capital this week indicates how much market influence the rating agencies have.

There are several independent companies that assess the credit quality of bonds. The bond rating gives an indication of the probability of default. Although the bond is what is rated, the rating is really a forecast of the ability of the entity behind the bond – e.g., a corporation or sovereign nation – to meet its debt service obligation.

Each rating firm uses a different and proprietary approach to assess credit quality, involving both quantitative and qualitative factors. For example, bond ratings by Moody’s Investors Service reflect long-term risk consideration, predictability of cash flow, multiple negative scenarios, and interpretation of local accounting practices. In practical terms, this means that things such as macro and micro economic factors, competitive environment, management team, and financial statements are all factors in determining the credit worthiness of a firm.

Rating agencies are subsequently able to characterise the risk of debt investments. An investment grade bond will have lower yield but offer higher safety (that is, lower probability of default). A junk bond will have higher yield but lower safety. Between these extremes are intermediate levels of quality: a bond that is rated AA will have very high credit quality, but lower safety than a AAA bond, while a bond rated at A or BBB, while still investment grade, indicates lower credit quality than a AA bond.

This concept is portable to IT. Just as the entity behind a bond is rated, the team behind an IT asset under development can be rated for its “delivery worthiness.” The difference is that we look to the rating not as an indicator of the risk premium we demand, but as an indicator of the threat (and therefore the discount) to the yield we should expect from the investment.

To rate an IT team, we can look at quantitative factors, such as the raw capacity of hours to complete an estimated workload, variance in the work estimates, and so forth. But we also need to look to qualitative factors; a sketch of how these might be rolled into a rating follows the list. Consider the following:

  • Are we working on clear, actionable statements of business need? Are requirements independent statements of business functionality that can be acted upon, or are they descriptions of system behaviour laden with dependences and hand-offs?
  • Are we creating technical debt? Is code quality good, characterised by a high degree of technical hygiene (is code written in a manner that it can be tested?) and an absence of code toxicity (e.g., code duplication and cyclomatic complexity?)
  • Are we working transparently? Just as local accounting practices may need to be interpreted when rating debt, we must truly understand how project status is reported. Are we managing and measuring delivery of complete business functionality (marking projects to market), or are we measuring and reporting the completion of technical tasks (marking to model) with activities that complete business functionality such as integration and functional test deferred until late in the development cycle?
  • Are we delivering frequently, and consistently translating speculative value into real asset value? In the context of rating an IT team, delivered code can be thought of synonymously with cash flow. The more consistent the cash flow, the more likely a firm will be able to service its debt.
  • Are we resilient to staff turnover? Is there a high degree of turnover in the team? Is this a “destination project” for IT staff? Is there a significant amount of situational complexity that makes the project team vulnerable to staff changes?
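Here is a minimal sketch of how the factors above might be rolled into a rating. The weights and grade bands are invented for illustration, and the factor scores themselves come from an experienced rater, not a formula:

```python
# Hypothetical delivery-worthiness scorecard. Factor scores run 0-5
# (assessed by an experienced rater); weights and grade bands are
# invented for illustration only.
FACTORS = {                          # factor: weight
    "actionable_requirements": 0.25,
    "technical_debt_control": 0.20,
    "transparent_reporting": 0.20,
    "frequent_delivery": 0.20,
    "turnover_resilience": 0.15,
}
GRADES = [(4.5, "AAA"), (4.0, "AA"), (3.5, "A"), (3.0, "BBB"), (0.0, "junk")]

def rate_team(scores: dict) -> str:
    """Weight the factor scores and map the result to a grade band."""
    weighted = sum(FACTORS[f] * scores[f] for f in FACTORS)
    return next(grade for floor, grade in GRADES if weighted >= floor)

print(rate_team({
    "actionable_requirements": 4, "technical_debt_control": 3,
    "transparent_reporting": 5, "frequent_delivery": 4,
    "turnover_resilience": 2,
}))  # weighted score 3.7 -> "A": investment grade, but not AAA
```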

At first glance, this may simply look like a risk inventory, but it’s more than that. It’s an assessment of the effectiveness of decisions made to match a team with a set of circumstances to produce an asset.

There are few, if any, absolute rules to achieving a high delivery rating. For example, assigning the top talent to the most important initiative may appear to be an obvious insurance policy for guaranteeing results. But what happens if that top talent is bored to tears because the project isn't a challenge? Such a project – no matter how much assurance is given to each person that they are performing a critical job – may very well increase flight risk. If that risk materialises, the expected returns of the project will crater instantly. And if it wasn't anticipated, a team can appear to change from investment grade to junk very quickly.

While the rules aren’t absolute, the principles are. An IT team developing an asset expected to yield alpha returns will generally be characterised as a destination opportunity offering competitive compensation, operating transparently with actionable requirements, maintaining capability "liquidity" and a healthy “lifestyle,” and delivering functionally complete assets frequently to reduce operational exposure. All of these are characteristics that separate a team that is investment grade from one that is junk.

While these factors are portable across projects they may not be identically weighted for every team. This doesn’t undermine the value of the rating as much as it means we need to be acutely aware of the circumstances that any team faces. This also means that assessing the delivery worthiness of a team is borne of experience, and not a formulaic or deterministic exercise. While the polar opposites of investment-grade and junk may be clear, it takes a deft hand to recognise the subtle differences between a team that is worthy of a triple-A rating and one that is worthy of a single A, and even why that distinction matters. It also requires a high degree of situational awareness – employment market dynamics, direct inspection of artifacts (review of requirements, code, software), and certification of intermediate deliverables – so that the rating factors are less conjecture and more fact. Finally, it is an exercise to be repeated constantly, as the “market factors” in which a team operates – people, requirements, technology, suppliers and so forth – change constantly. This is consistent with how the rating agencies bring benefit to the market: they are not formulaic, they spend significant effort to interpret data, and they are updated with changing market conditions.

FDIC Chairman Sheila Bair commented recently that we have to look at the people behind the mortgages to really understand the risk of mortgage-backed securities. With IT projects, we have to look at the people and the situations behind the staffing spreadsheets and project plans. IT is a people business. We can measure effectiveness based on asset yield, but we are only going to be as effective as the capability we bring to bear on the unique situation – technological, geographical, economic, and even social-political – that we face. Rating is one means by which we can do that.

Investors in financial instruments have a consistent means by which to assess the degree of risk among different credit instruments. IT has no such mechanism to offer. Just as debt investors want to know the credit worthiness of a firm, so should IT investors know the delivery worthiness of their project teams.

Especially when alpha returns are on the line.





Friday, July 25, 2008

Pettit - Are You Marking IT Projects to Market, or Meltdown?

by Ross Pettit - 25 July 2008

Capital markets firms are holding a lot of paper – primarily collateralized debt obligations – that have been subjected to massive writedowns in recent quarters. Among the reasons for the writedowns is the difficulty in valuing the assets that back the paper. These securities are not simple bonds issued against a single fixed income stream, they're issued against collections of assets, sometimes spanning classes: a bond may be backed by a pool of jumbo California residential mortgages as well as a pool of credit card debt from cardholders all over the United States. This makes it difficult to assess risk of the bonds: all the moving parts mean there could be little – or substantial – exposure to different types of market dynamics, obfuscating the risk premium these securities should yield.

Assets like this have been valued – or marked – to a model. In turbulent markets, the models quickly show their limitations because they fail to account for events outside of the rigid, static universe codified by the model. Mass dumping of the securities (and concomitant evaporation of buyers), mass defaults of the underlying assets (e.g., mortgage defaults), and loss of market value for the underlying assets (e.g., declining residential home prices) are rarely baked into valuation models. When these events do occur, confidence erodes quickly and the market for these securities declines precipitously.

This forces those holding these assets to make one of two decisions. One is to sacrifice liquidity by holding the paper to maturity. By taking a long position and expecting that the investments will pay off (itself an act of finger-crossing) the holder is able to keep it on their books at a high value. Unfortunately, they can’t trade the position or use it as collateral for short-term credit because no counterparty will value the asset as highly as the model does. The other option is to sacrifice value by accepting a market price for the position: if the holder needs liquidity, they must mark the asset down to reflect what the market will pay for it.

Traditionally managed IT projects are not materially different. They are opaque investments that stifle organizational responsiveness.

Project plans consist of technical tasks (“create database tables” and “create QA plans”) collected into abstract phases of execution (“technical infrastructure” and “detailed design”). Because there is at least one degree of separation from task to business deliverable, traditionally managed IT projects are inherently opaque. They also assume that the business is taking a long position: the asset under construction isn’t designed to come together until “maturity” of the project.

Progress is measured as the percentage of abstract tasks that have been completed, and this is where the trouble begins. This is marking to model. Project plans are just models, and they typically don’t take meta-risks into account: e.g., the sudden exit of people, or a change in business priority. Worse, the work remaining isn’t necessarily a linear expenditure of effort: the remaining tasks have not yet been performed, and their estimates assume a high degree of quality in previous deliverables. As a result, traditional IT tends to overstate the value of an investment. If we shut down a project that alleges to be 65% complete, we will not have 65% of its promised functionality; we will have far less. We may have performed a lot of technical work, but we can't run the business on technical tasks. This is, in effect, a market writedown on that project. By marking to model, we’re marking to myth.
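To see how a 65%-complete project can hold far less than 65% of its value, consider a small hypothetical: tasks are spread evenly across features, but a feature only has business value once all of its tasks – including late-phase integration and test – are done. The numbers are invented for illustration:

```python
# Hypothetical project: 100 technical tasks across 10 features.
# A feature is only shippable when ALL of its tasks -- including
# late-phase integration and test tasks -- are complete.
tasks_per_feature = 10
features = 10

# Work has proceeded breadth-first: 6-7 tasks done on every feature,
# integration and test deferred. 65 of 100 tasks are complete.
tasks_done_per_feature = [7, 7, 7, 7, 7, 6, 6, 6, 6, 6]

marked_to_model = sum(tasks_done_per_feature) / (tasks_per_feature * features)
marked_to_market = sum(d == tasks_per_feature for d in tasks_done_per_feature) / features

print(f"marked to model:  {marked_to_model:.0%}")   # 65% of tasks "complete"
print(f"marked to market: {marked_to_market:.0%}")  # 0% shippable functionality
```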

There is a better approach.

Agile practices require that we continuously deliver small units of functionality. To do this, we must express requirements as short statements of business need, create and automate execution of unit tests with the code delivered, build the project binary and release it to an environment continuously. By doing these things, we are completing units of business functionality, not just completing an inventory of technical tasks. Progress is measured by functionality delivered. This means that Agile projects mark to market: if we shut down an Agile project, we face no write-down.

Many argue against marking to market. A thin market doesn’t mean investments are without value: provided the underlying assets don’t default, securities issued against them can provide significant return if held to maturity. So it goes with IT traditionalists: IT projects require a lot of purely technical work that makes incremental deliveries difficult. They must be “held to maturity” to achieve their payoff.

Common sense tells us otherwise. The absence of a market for financial instruments tells us that there are tremendous risks and uncertainties in the assets themselves. This means there is little appetite for them outside of highly adventurous capital, and we expect low asset values with steep risk premiums. The problem with long IT investments lies in the faith placed in their deterministic valuation when so much is simply unknown. IT projects – particularly those capable of driving outsized returns – are not exercises that can be solved by elaborate planning. They can only be solved by incrementally coming to grips with change and uncertainty.

Long positions restrict financial agility if we have to mark down the value of a financial position to get capital out of it. So it is with IT: taking long positions in IT projects binds people to projects for long periods of time. This makes capability illiquid across our IT portfolio. We face a writedown should we need to move people from one project to another to prop up performance.

“Models are by definition an abstraction of reality.”1 By marking to model, we may very well end up with a rapid deterioration of a long position. Catastrophic project failures are never at a loss for models: by repeatedly grafting models on top of highly dynamic situations, they’re wantonly marking to a meltdown. Confidence in assertions should never pass for facts of actual performance. By marking projects to market we know the value and understand the volatility of our IT investments.

1 Carr, Peter. "The Father of Financial Engineering." Bloomberg Markets Magazine. April 2008.




About Ross Pettit: Ross has over 15 years' experience as a developer, project manager, and program manager working on enterprise applications. A former COO, Managing Director, and CTO, he also brings extensive experience managing distributed development operations and global consulting companies. His industry background includes investment and retail banking, insurance, manufacturing, distribution, media, utilities, market research and government. He has most recently consulted to global financial services and media companies on transformation programs, with an emphasis on metrics and measurement. Ross is a frequent speaker and active blogger on topics of IT management, governance and innovation. He is also the editor of alphaITjournal.com.