Thursday, June 4, 2009

Ververs - Generation Y R.I.P.? Not so Fast!

by Carl Ververs - 4 June 2009

Gleefully, various business publications are proclaiming the death of Generation-Y and their irreverent, anti-business shenanigans. Sure, their weird, hyper-connected ways got an unlikely candidate elected as president. But it just CAN’T BE that those lazy Twittering Facebookers get away with it while the rest of us had to pipe down, see our heroes stripped of their crowns and watch helplessly as our iconic businesses were exposed as mismanaged or fraudulent.

Or can it?

Editors of unapologetically pro-business, anti-Millennial magazines have not hesitated to take an advance on the humbling impact of the economy on Gen-Y. The Economist, a more objective but no less pro-business periodical, did a 180: having previously written that Gen-Y's attitudes had to change, it later acknowledged that their preferred way of working should be embraced (Managing the Facebookers). In a subsequent article (Gen-Y goes to work), it reported that the downturn had Millennials taking a humbler tone in interviews, but that they were not giving up on many of their other traits.

Their uniqueness is waning because they are being squashed by economic realities. But we would do well to not overlook the positives from the Gen-Y mindset.

Everybody is saying the fun is over and that Gen-Y needs to figure out that work means suffering. Fear has us desiring a simpler and more predictable way of living and working, so we're seeing people revert to the behaviors they know. But the way of work and life of yore may not apply in this new world order. In fact, long before we get to any sense of "stability" again, we're going to have to deal with a lot of volatility: just look at the wild swings of the world markets. Ironically, Gen-Y's orientation toward life may be quite applicable now. They are possibly better prepared to deal with volatility and change. Per this article on The Big Money, Generation Y is taking the whole recession in stride.

“The Network Is The Company”

Gen Y's leverage continues to be that employers need them more than they need employers. They do not have to make a killing financially, and they certainly will not sacrifice their health and lifestyle for a firm. Since companies cannot be trusted to take care of them and keep them employed, they cut companies off at the pass, keeping a pool of opportunities open in case the boss does not behave or the company does not come through. Gen-X would do well to do the same; we can learn something from Gen-Y here.

The downside of this lack of loyalty is that they can be hired away easily, are overly suspicious of authority and management, and consequently put a company's return on investment in training and on-boarding at risk. All of this may drive up labor costs even if salaries stagnate. The positive side is that they can just as easily be hired away from somewhere else, and that their volatility forces them to learn to come up to speed rapidly at new jobs (which makes them deployable in other functions within your firm as well), keeping your management honest and on its toes.

It will be interesting to see how Gen-Y ends up viewing wealth and finance. They have witnessed up-close the meltdown of the US financial establishment and the evaporation of personal wealth. Where Gen-X grew up with the notion that you'd be fine as long as you bought a house and kept investing in your 401(k), Gen-Y has come to understand the uncertain and volatile value of capital and assets. Home-ownership now has a whiff akin to teenage pregnancy: a liability with unclear upside.

A Millennial does not buy into the '80s craze of self-help success books such as "The Seven Habits of Highly Effective People" because the people quoted as examples are either dead, in jail or in rehab. A few others have publicly abandoned their beliefs (such as "greed is good"). Of course I'm exaggerating, but it is clear that the personas and lifestyles the Boomers, and to an even greater extent Gen-X, were oohing and aahing over have no meaning for Gen-Y. What good is money if it corrupts you? They certainly learned the Faustian lesson that there truly is such a thing as selling your soul.

Race and socio-economic strata have also ceased to have meaning for Gen-Y. A mother told me how her daughter dismissed her questions about which race the people in her class were as irrelevant and racist.

As workplace expert Tamara Erickson points out in her book Plugged In: The Generation Y Guide to Thriving at Work, Millennials love to learn, and they’re good at it. With the right guidance they may turn out to be incredibly effective employees.




About Carl Ververs: Carl has been a business transformer through technology since the start of his career two decades ago. Always at the vanguard of new thinking and creative application of systems, he built CRM systems, used SOA and applied Agile techniques well before they were named.

Carl's technical expertise lies mainly in high-performance computing for derivatives trading and business process management. His background spans a wide spectrum, including business application specialist, hierarchical storage system architect, customer management systems designer, trading operations manager, Agile project management coach, SOA practice lead, PMO/QA director and deputy CIO. Carl is an avid musician and composer, computer graphics artist and geopolitical pundit. He lives in Chicago with his wife and son.

Wednesday, May 13, 2009

Pettit - Governing IT Restructure

By Ross Pettit - 13 May 2009

In a previous article on change management, I outlined the urgent need to restructure IT. Corporate revenues have plummeted and demand will take a long time to recover. Capital structures have buckled and capital markets are on life support. Business operations are being scrutinized for inefficiency and businesses are deflating their cost structures through salary reductions, unpaid leave and RIFs. With this as a backdrop, it comes as no surprise that 91% of CEOs say they need to restructure the way their organizations work. IT, being core to business operations, faces restructure as well.

The restructuring we need to do in IT has less to do with org charts and budgets that direct effort, and more to do with behaviours that get results. Testing must be a team responsibility, automated and integrated into everybody’s daily activity. Our business partners must be continuously involved in solution development, to enable continuous change management and adaptive project management. IT Governance must be non-burdensome and non-invasive, tracking and connecting a variety of indicators. We have the experiences, examples, patterns, tools and technologies to do all of these things today, but bringing this about requires very basic change in how people get things done.

So how do we restructure? We follow the usual change management formula – figure out where we’re at, where we want to be, and how we’ll get there – but we pay less attention to the tasks of restructuring, and more to the results we achieve.

Step 1: Define a Vision for Operations

The first thing to do is define a vision of operations. That vision needs to be explicitly spelled out. For example, part of our organizational vision might include an expectation such as: "solutions are subjected continuously to quality gatekeepers that validate technical, functional and non-functional completeness." Simple, assertive statements like this clearly communicate expectations for how work is done.

But more than just the “what” of the vision, we also need to make clear “why” we want to do these things. That means communicating the business impact we expect by doing these things. It also means we draw a line between “what we do” and “what it accomplishes.” For example, suppose we’re trying to get solutions into production faster. We can make it clear that we expect we’ll do that by reducing rework, having fewer non-value-added tasks, and having higher bandwidth communication among people in teams. We expect our operational vision will bring each of these about. For example, continuously executing quality gatekeepers will catch defects sooner and reduce rework, and also eliminate non-value-added tasks such as manual solution packaging and industrial-scale manual testing.
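To make "continuously executing quality gatekeepers" concrete, here is a minimal sketch of one such gate. It assumes a coverage tool that writes a one-line summary file; the file name and threshold are invented for illustration, not prescriptions:

    // Hypothetical build gatekeeper: breaks the build when line coverage
    // regresses below an agreed threshold. Runs as the last step of every build.
    import java.io.BufferedReader;
    import java.io.FileReader;

    public class CoverageGate {
        private static final double MINIMUM_COVERAGE = 0.75; // threshold assumed

        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new FileReader("coverage-summary.txt"));
            double coverage = Double.parseDouble(in.readLine().trim()); // e.g. "0.81"
            in.close();
            if (coverage < MINIMUM_COVERAGE) {
                System.err.println("Coverage " + coverage + " is below the gate of " + MINIMUM_COVERAGE);
                System.exit(1); // non-zero exit fails the build
            }
            System.out.println("Coverage gate passed.");
        }
    }

The point is not the specific check but that the gate runs on every build, so a regression surfaces the day it is introduced rather than during a late-stage QA phase.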

In expressing why we want to do these things, we must be specific: for example, as a result of this restructure, we expect we will be able to deliver new solutions to production every 20 business days, or we expect to reduce IT spend on solution delivery by 25%. The purpose of any restructure isn’t a new structure, but results, so we must use the language of results.

Step 2: Assess Current State

With a very specific vision in hand, we now assess the current state of how work is done, so we know the gap between where we’re at and where we want to be. This requires critically assessing both how work is done, as well as how it is not done. This generally involves artifact review, but more importantly involves facilitated group discussion among people in different roles - PMs, business people, developers, QA analysts and so forth - to identify practices and patterns.

This assessment is the single most important activity in a restructuring initiative. If we get this piece wrong, nothing else – not the vision, not the plan, not the monitoring – will matter, because all of our actions will be based on false data.

One of the most common mistakes organizations make in restructuring is to try to self-assess where they're at today. Self-assessments are inherently inflationary, because any type of assessment has performance ramifications. They make people uncomfortable, and can be outright confrontational. Self-assessing is usually hypersensitive to the politics of a situation, which creates a tremendous risk of distorting objectivity.

This means we should look to bring in somebody from the outside to perform the assessment. Having no personal investment to protect or political baggage to lug around will give them an independent perspective. In addition to looking critically at how things are done today, they can ascertain the viability of the vision, and recognize potential (and potentially undesirable) outcomes that may result from this restructure. But this takes the right facilitator: it calls for somebody experienced in business, management and technology who understands IT and can quickly understand a business context. The right person will not only waste little time in ramping-up, they’ll bring relevant experience into the process.

Another risk during assessment is denial that there are any real shortcomings in the way work is done. To overcome a false sense of operational excellence, start the assessment process by talking to IT's customers: perform a customer satisfaction survey or Net Promoter score of IT solution users. An unvarnished external opinion makes it a lot harder to gloss over operational shortcomings, and gets people focused on results (“how do we improve customer satisfaction”) instead of blame (“you’re making it impossible for me to do my job”).

Step 3: Define a Restructuring Plan

With current and future state now defined, we next determine the plan that will get us there. A behaviourally-centric reorganization is going to require a change in “muscle memory.” Change doesn't happen overnight, so we need to figure out the stages of organizational evolution that will allow new patterns of work to take root and become durable under stress. While we can look to patterns of organizational change to help us sequence the activities of our change initiative, the plan will be largely derived from experience. There will be some “low hanging fruit” as well as some significant challenges in the plan. We can get on immediately with the obvious stuff, but before we get too far down a path of execution, we need to come face to face with our competencies and deficiencies and call in outside help. For example, if we get blindsided by late-stage project failures, we shouldn't expect that our PMs can self-source a change to new project management practices. Recognizing the areas where we need expertise and engaging it at the right time of our restructuring initiative will get us through the change process with minimum blowback.

Step 4: Monitor the Restructuring

Finally, while we’re in the process of restructuring, we need a way to monitor that we are making meaningful progress. Ultimately, we need to be cognizant of our results: are we closer to our goal of achieving faster time-to-market, or reduced operating costs? But those take time to materialize, and we need to scrutinize the underlying factors of our success or failure. Ongoing customer satisfaction or Net Promoter scores that we initiated during the assessment phase can help ascertain whether we're on the right track, but again, this isn't an elemental enough data point. Restructuring milestones such as “roles defined and staffed” and “reporting structures created” are insufficient, because they're task-based and not results-based. What we're looking for are ways to monitor behavioural change.

To do this, it helps to have a model, such as the Agile Maturity Model. Having been applied at a number of IT organizations in a variety of industries, the AMM allows us to consistently frame current and target organizational behaviours, map actions that will change those behaviours, and monitor how well we’re taking to those behaviours over time.

By using a framework that allows us to consistently and frequently assess how work is performed, we get a pretty good idea as to whether we're increasingly doing things that will engender success. It also allows us to scrutinize where we're deficient, communicate the impact that deficiency is having on our goals, and take corrective action.

You can get started with some of the AMM concepts by running an online profiling tool for your own organization or project. This will give you a picture of how you are structurally aligned today and some insight into where you might have opportunities to change.

While we don’t yet know the regulatory, capital, competitive and commercial fallout from the financial collapse and economic recession, we do know that "business as usual" is off the table. While this makes day-to-day execution challenging, it presents us with an opportunity to recast and remake IT. By focusing on results as opposed to effort, we make IT a transparent, efficient, responsive and collaborative contributor to the business. This makes IT less a supplier of technical services, and more an engaged business partner, putting it firmly at the forefront of corporate leadership. At a time when companies are navigating uncertain waters, better to be sharing responsibility at the helm than relegated to the engine room below deck.




About Ross Pettit: Ross has over 15 years' experience as a developer, project manager, and program manager working on enterprise applications. A former COO, Managing Director, and CTO, he also brings extensive experience managing distributed development operations and global consulting companies. His industry background includes investment and retail banking, insurance, manufacturing, distribution, media, utilities, market research and government. He has most recently consulted to global financial services and media companies on transformation programs, with an emphasis on metrics and measurement. Ross is a frequent speaker and active blogger on topics of IT management, governance and innovation. He is also the editor of alphaITjournal.com.

Wednesday, April 29, 2009

Breidenbach - Getting to a Hire Level, Part Deux

By Kevin E. Breidenbach - 29 April 2009

So, having read the first part of this series, you’ve looked over your organization and found that you have a good Agile process in place, quality people working for you, and a good base of subject matter experts. It turns out that the 80 hour weeks your team has been working are caused by the runaway success of your business and your team’s enthusiasm for accepting more and more stories and pushing new functionality out the door. Yes, I know, very Obamistic, but it could happen!

So what do the rest of us do?

Creating a Hiring Process

You could just throw together a job specification and rely on search firms or job boards. You could also ask the neighbor kid to prepare and file your tax return for you. Search firms do find good people, but you can’t outsource accountability: at the end of the day, it’s your responsibility to find the right person or people.

Marketing the Position

The first thing most job candidates see is a published job description and position requirements. This is an extremely important artifact: not only does it describe what you are looking for, but it also advertises the job role. If you want the best people you need to entice them into entering your recruitment process, and what you advertise is an important part of achieving that.

If you know that a developer will have the opportunity to do greenfield development, put it in the description. If they are going to get the chance to work with some of the best minds in the business, tell them. Starting to see where this is going? This is Marketing 101.

The job requirements should be specific, but not give away too much information. For instance, suppose we know that we definitely want someone with experience of working in an Agile development team. So say just that, but don’t get into specifics such as: “must use test driven development” or “has pair programmed with Ward Cunningham”. The candidate (and sometimes the search firm) will add whatever you have specified to the resume. This is why you have to thoughtfully drill down into their experience during the interview: if they’ve never really worked in an Agile team, you'll know about it soon enough.

Improve Your Interview Success Batting Average

Every person that you bring in for an interview costs both money and time. It reduces productivity during the time they are visiting, and is an overhead cost to your business. This is true of both face-to-face and phone interviews. To make best use of increasingly scarce time, you need to make sure you are bringing in the right people in the first place.

Test, Test, Test

While you could rely on a search firm or recruiter to only supply you with resumes of people that fit your job description, we all know that rarely happens. Excuses like “JMX and JMS are the same, right?” just don’t cut it, and in the past I’ve been known to stop dealing with search firms who do things like that. But there is an easier way to know you're investing time in the right candidates: testing!

No matter what your HR department tells you, there is nothing wrong with testing candidates. Providing it is done fairly and each person applying for a particular role is given the same tests with the same constraints, you’re good to go.

My favorite test is to send a programming exercise to candidates before they are brought in for an interview. Give them a set number of days to complete it, and eagerly await the results. This will instantly give you an idea of who to bring in: if they send you one file of code, but no build script or unit tests, the resume goes in the bin. If they do send in what you consider to be a complete response, you’ve now got some very specific talking points for the interview. Quiz them on their design decisions, the patterns they used, and so forth. You’ll soon know if they actually produced the code they submitted, or had somebody write it for them.

The Day of the Interviews

So you’ve perused the resume, checked out the programming exercise and you’ve decided to bring the candidate in for an interview. Tell the candidate that they should expect to be at your location from between 1 and n hours, where n depends on how many face to face interviews you have planned.

...And more tests

Still, you don't want to invest your staff's time interviewing a candidate if he or she is not going to be up to the task. So, once a candidate is on-site, the first thing I do is give them a couple of written tests.

I have some terrible code that compiles and works, but is inefficient and poorly written. One test I ask them to do is to make the code more efficient. A second test is a basic design quiz that asks how a developer could refactor code to make it easier to unit test. Each test should take 10 minutes, but I give them 20 minutes total and let them decide how to spend the time. It doesn't take long to review the results. If a candidate fails to perform in this exercise, I won’t waste time with an interview.
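To give a flavor of the first test (this is an invented example, not the actual code I use), hand the candidate something like the method below and ask them to make it more efficient:

    public class JoinExercise {
        // Works, but is quadratic: each += copies the entire accumulated string.
        public static String join(java.util.List<String> parts) {
            String result = "";
            for (String part : parts) {
                result += part + ",";
            }
            return result; // also leaves a trailing comma
        }

        // The kind of answer you hope to see: one buffer, one pass, no trailing separator.
        public static String joinImproved(java.util.List<String> parts) {
            StringBuilder result = new StringBuilder();
            for (int i = 0; i < parts.size(); i++) {
                if (i > 0) result.append(',');
                result.append(parts.get(i));
            }
            return result.toString();
        }
    }

A candidate who spots the copying, the trailing separator, and can explain why the fix matters has told you more in ten minutes than most resumes do.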

The next phase is a quick pair programming exercise. First, let the candidate take to the keyboard for a while, and then sit back and be in the partner’s chair. This will give you some idea of how well he communicates, his ease at pairing and how he performs under pressure. Again, if he’s not a good fit, thank him for his time and send him on his way. Your interviewers can get on with their work, and they’ve not been disrupted by a poor candidate.

Face to Face Interviews

At this point, you're ready to interview. Send in your “A’s” to make sure you’re only hiring “A’s”. They should stick to questions in their area of expertise: business people shouldn’t ask technical questions as they will look unprofessional and may dissuade a good candidate from joining. However, the technical people should concentrate on technology applicable to the domain they are working in: it may be wonderful that somebody can design an elevator control system, but it's of theoretical value only if you’re building trading platforms.

Discuss the Candidate

Get everyone together to discuss the candidates who make it through the face-to-face round. The decision must not be about egos, but facts: one person who takes a dislike to a candidate shouldn’t have the decision making ability to throw him out. Also, the hiring manager shouldn’t be able to over-ride the team decision. There can be a lot of nefarious motivations in hiring decisions. I’ve seen people hired solely because they were friends with the hiring manager, and it always ends in tears.

The Golden Rule of Hiring

Hiring is not an easy task and it shouldn’t be taken lightly. As much as you need a development process in place, you must first have a formal hiring process in place. Remember also that in advertising for new hires, you are advertising your company as well. Being unprofessional through the hiring process will turn away top candidates, even in the current economic climate. Above all else, remember the golden rule: treat the process and the candidate as you would like to be treated. One way or another, all the tests, interviews and advertising - all the activities you perform in the hiring process - communicate how much respect is valued in your organization. Your next hire will respond to that most of all.




About Kevin Breidenbach: Kevin has a BSc in computer science and over 15 years of development experience. He has worked primarily in finance but has taken a few brief expeditions into .com and product development. Having professionally worked with assembly languages, C++, Java and .Net, he's now concentrating on dynamic languages such as Ruby and functional languages like Erlang and F#. His agile experience began about 4 years ago. Since that time, he has developed a serious allergic reaction to waterfall and CMM.

Thursday, April 23, 2009

Cross - Bridging the Gap

by Brad Cross - 23 April 2009

From the feedback I've received on my technical balance sheet series, I've identified two gaps in how people understand the concept. The first is the perception that this approach requires somebody to be simultaneously knowledgeable in finance and software. The second is that it is unclear how you can adopt some of these ideas with a very small initial investment.

The technical balance sheet ideas are simple and cheap to try out. They can help you make cross-disciplinary trade-offs about software, finance, and operations. This can involve technical people who know almost nothing about finance, finance people who know almost nothing about technical work, and business operations and project management people who may not be strong in either finance or software.

So, bridging the first gap is easy: you don't need a double PhD in finance and computer science to understand these ideas. Building software costs money, going slower costs more money, and technical debt makes you go slower. If you have a lot of assets, but those assets are over-leveraged with debt, you can end up cash flow negative with negative equity. On the other hand, if you have no debt and no assets, you also have nothing. So the other side of the equation is to build software that has high asset value. Focus on the aspects of your systems that have the highest return on investment, and do so without borrowing through shortcuts and sloppiness. The technical balance sheet is just a way to give you a mental model for thinking about the trade-offs.

This leads into the second point: this is cheap to adopt. You don't have to spend a lot of time and money to try this out. The first article, on gathering metrics for technical liabilities, may have been a bit intimidating because it was not clear how to create a quick balance sheet for a project without making a significant up-front investment. Hopefully that was cleared up in the explanation of the approach in the field guide article.

You can quickly assemble a prototype balance sheet with just a few metrics that are really simple to round up. I did this on one project in about an hour by harvesting the metrics that were already available via the coverage bundle and PMD, all of which were already running in that project's build.

Bootstrapping a balance sheet is not about spending a lot of time putting together a bunch of overly complex tables and metrics. You can do a few simple exercises, look at the numbers, and see if it helps you think about your trade-offs and plan of action.
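As an illustration of how lightweight this can be (the component names and scores below are invented), a first-cut balance sheet is little more than an asset score and a liability score per component:

    // Hypothetical one-hour balance sheet prototype.
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class PrototypeBalanceSheet {
        public static void main(String[] args) {
            // {asset score, liability score} per component - invented numbers,
            // e.g. assets from substitutability, liabilities from coverage/PMD metrics.
            Map<String, int[]> components = new LinkedHashMap<String, int[]>();
            components.put("CoreDomain", new int[] {4, 1});
            components.put("Plumbing",   new int[] {1, 3});
            components.put("Reporting",  new int[] {2, 2});
            for (Map.Entry<String, int[]> entry : components.entrySet()) {
                int equity = entry.getValue()[0] - entry.getValue()[1];
                System.out.println(entry.getKey() + ": equity " + equity
                        + (equity < 0 ? "  <-- over-leveraged, look here first" : ""));
            }
        }
    }

If a table that crude already changes the conversation about where the team spends its time, the approach has paid for its hour.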

As an example, on a project I was working on last year, I saw that we spent 40-50% of story points on a couple of areas of plumbing. After some investigation, it turned out that there was some heavyweight architecture and design in place that I was able to eliminate pretty quickly. On top of that, a lot of the code could be replaced by open source components. I also saw that the most valuable components as far as the business was concerned were in terrible shape (bad design, lots of bugs, low test coverage). Right away, I could see that we were over-investing in maintaining plumbing and under-investing in the parts that generate the real cash flows.

This scenario is common: an over-investment on low quality infrastructure coupled with an under-investment in the parts of the system that support the business in generating actual cash flows. The solution is to figure out how to reduce your ongoing investment into plumbing, while simultaneously focusing on how to increase your investment in the cash flow generating parts of your systems. With minimal effort, the technical balance sheet can expose where these over or under investments are. It communicates in terms everybody can understand (e.g., we get value from this area of the code, we do not get value from that area of the code.) Applying a technical balance sheet to your project can make it clear to each member of the team where their attention should not be, as well as where it needs to be, to maximize the business impact of your project.




About Brad Cross: Brad is a programmer.

Thursday, April 16, 2009

Breidenbach - Getting to a Hire Level, Part I

By Kevin E. Breidenbach - 16 April 2009

I’ve been involved in recruiting technology people for some time. Although not a recruiter, my role has often included the hiring (and, less fun, the firing) of staff for a number of different organizations. I now find myself in a technology group – and when I say "group," I mean "me" – that has to embark on the task of building a team that can supply the technological solutions that the business partners need. I plan to apply the knowledge I’ve gained over the past umpteen years to the job, and thought that it might be fun to impart that information to all of you.

Common Mistakes Made When Hiring People

Hiring for Hiring's Sake

Your team has been working 80-hour weeks for the past 6 months when suddenly you’re given an enormous budget to hire more people. (I know – a dream in this economy, but it's been known to happen.) Suddenly the recruitment engine kicks in full swing. Job requirements are posted, search firms are given their marching orders, and resumes flood in. Your team doubles in size in just a few months, but you’re still working 80-hour weeks. What on earth went wrong?

Did anybody look at why your team was working 80-hour weeks in the first place? Could it be that they have no process? Could it be that there was insufficient domain knowledge? Could it be that your team has a number of “net negative” people? If any of those are true, then just hiring new people is not going to solve the problem. More than likely, it'll just make it worse.

If you don’t have a process in place, new people aren’t going to be able to contribute that quickly and will get lost in the sea of misguided effort. If you don’t have sufficient domain knowledge, you'll have nobody to identify who the right people are to hire, and then teach them what they need to know once they start. If you don’t identify and remove net negative developers – people who contribute less than the work they create for other people to do – you don’t eliminate disruption within the team. This all may seem obvious, but all too often, nobody looks at the root cause of a problem before they set out to solve it by hiring.

Who's Doing the Hiring

A very wise person once told me of the “As hire As, Bs hire Cs” conjecture, and I’ve seen it in practice. Someone at the top of their game - an "A" person - is more likely to hire someone who has equal or better skills than they have, than they are to hire someone who is crap at their job. This is because highly competent people don’t see hiring other competent people as a threat, but as a way to learn. Incompetent people see hiring competent people as a short path to being laid off. Think about it: would Rex Grossman hire Tom Brady when they could be competing for QB?

An “A” candidate isn’t necessarily somebody strong in a particular skill (unless that is a requirement). It could be someone with a desire to learn, who spends their spare time researching technologies, and who has confidence in themselves. Aptitude and attitude, more than skill or knowledge, are what separate the top-tier from the middling performers.

Do I Really Need a Specialist?

I’ve heard excuses from developers at all levels about why things don't get done. “There’s nobody to design the database”; “We desperately need a GUI developer”; “My wife just had a baby and I’m on paternity leave”. Okay, that last one is valid, provided the guy is actually married and does have a newborn.

Seriously though, many of those excuses come from net negative developers who are content with knowing what they know, preferring the comfort of working in a silo to learning and growing. If they have no interest in growing their personal capabilities, do they really have any interest in growing your organization?

Think about the real need you have for different specialists. For example, do you have enough database work to warrant hiring a full time database administrator? Do you really need a specialist GUI developer when you only have two screens to produce? How much use will you really get from him, versus how much work he’ll manufacture for himself to do? Could a developer who’s eager to learn, working with some outside expertise for a bit of coaching and auditing, get the work done more efficiently? A small team of poly-skilled generalists will always outperform a large team of specialists in silos.

The fact is that some specialists only want to advance their specialization. If you don’t have that much work for them, they’ll be expensive ornaments at best, or create a perpetual (and costly) maintenance legacy at worst.

Before Hiring

Before you start hiring, get your house in order. Make sure that you really need new people, not that you have the wrong people, or have a complete lack of process. If you do hire, make sure the right people are doing the interviewing. Finally, make sure that any specialist you add will make your team stronger than it would be from having people with a wider collection of skills. The business will appreciate it if you don’t use their entire budget on useless hires. They’ll also appreciate it if your team gets more stuff done, instead of having new faces to blame.




About Kevin Breidenbach: Kevin has a BSc in computer science and over 15 years of development experience. He has worked primarily in finance but has taken a few brief expeditions into .com and product development. Having professionally worked with assembly languages, C++, Java and .Net, he's now concentrating on dynamic languages such as Ruby and functional languages like Erlang and F#. His agile experience began about 4 years ago. Since that time, he has developed a serious allergic reaction to waterfall and CMM.

Wednesday, April 1, 2009

Cross - A Field Guide for Applying Technical Financial Statements, Part II

By Brad Cross - 1 April 2009

This article is the second in a series on putting the technical balance sheet to work. If you haven't done so already, you will want to read the first in the series.

Taking Action to Increase Equity

So, we've bootstrapped our balance sheet. Now, how do we get some of these ideas into our decision-making process? If we notice something that is harming our owner's equity, how do we actually put that knowledge into practice?

It is critical to have code metrics that are actionable. A lot of tools will show you fancy charts, tables and diagrams, but few of these visual representations are actionable from a technical perspective. You need to be able to identify prioritized lists of actions. For example, Simian or CPD will sort the list of code duplication by worst offenders. Clearly, your first action is to tackle the worst offenders. If you see that your highest bug counts and lowest test coverage are in one of your most valuable components, then working on the robustness of that component is a clear action item. Often you can find a few monolithic classes where many of the issues occur; refactoring these while bringing them under test can be a simple way to achieve a radical change in your equity for that component.

Once you have defined a list of actions prioritized by impact on equity in your most valuable components, you are ready to start increasing your equity. There are other important practical and technical considerations, however. As we discussed in the article on cost of carry vs. cost of switching, you should be mindful of the impact of encapsulation and the dependencies among components. Start at the least dependent, but most depended upon, parts of the system: the leaf nodes of the dependency graph.
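As a sketch of what "least dependent, most depended upon" looks like in practice (the component names and edges here are invented), you can rank components straight from a dependency list and start where the outbound count is zero:

    // Hypothetical leaf-node finder over a component dependency list.
    import java.util.HashMap;
    import java.util.Map;

    public class LeafNodeFinder {
        public static void main(String[] args) {
            // {A, B} means component A depends on component B (edges invented).
            String[][] edges = { {"Trading", "Mathematics"}, {"Trading", "Data"},
                                 {"Simulation", "Mathematics"}, {"Data", "Providers"} };
            Map<String, Integer> inbound = new HashMap<String, Integer>();
            Map<String, Integer> outbound = new HashMap<String, Integer>();
            for (String[] edge : edges) {
                Integer out = outbound.get(edge[0]);
                outbound.put(edge[0], out == null ? 1 : out + 1);
                Integer in = inbound.get(edge[1]);
                inbound.put(edge[1], in == null ? 1 : in + 1);
            }
            // Prints Mathematics (2 dependents) and Providers (1): start there.
            for (String component : inbound.keySet()) {
                if (!outbound.containsKey(component)) {
                    System.out.println(component + " is a leaf with "
                            + inbound.get(component) + " dependents");
                }
            }
        }
    }

Improving a leaf pays off everywhere that depends on it, without dragging the rest of the graph along for the ride.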

There are a couple of ways to tackle your list of actions. One is through big-bang refactoring or "clean up" efforts, and the other is through a more incremental "as you touch it" approach.

I typically prefer the incremental approach. I continue working as normal, and make the assumption that when I do a new piece of work in a target area, I am going to invest more time because I will be implementing actions from my prioritized list for increasing equity.

This incremental approach works very nicely most of the time and avoids big-bang "let's stop and refactor the world" efforts, which tend to suspend new development and rarely seem to work out well. That said, there are circumstances when it is appropriate to invest in testing and other infrastructure, because these investments can make your incremental efforts more effective. You will often run into structural issues that cause trouble. For example, you may have some monolithic piece of code in the system that everything is tightly coupled to. If this is holding up incremental progress, breaking this code apart in order to restructure the dependencies can be a sound investment. At the moment, I don't have any way to quantify this - you just need to have people around who have significant experience with large systems and restructuring efforts, a pragmatic view, and a good instinct for when this sort of thing is required.

Refactor, Rewrite or Replace?

In the article on the cost of carry vs. cost of switching, we made a passing mention of migration strategy.

Cost of switching is the cost associated with your mitigation strategy. When indebted code is an impediment to sustaining and evolving the functionality of your project, your mitigation strategy is how you go about transitioning away from indebted code in order to reduce or eliminate the impediments. This can entail completely replacing a component with another component, partially replacing a component, or simply small incremental refactorings that accumulate over time. Switching costs are impacted by the size and scope of the component you are replacing, the time you have to spend to find and evaluate commercial and open source alternatives, time spent on custom modifications to the replacement, and time spent on migrating infrastructure - for instance, migrating or mapping user data after switching to a different persistence technology.

Let's look at migration strategy in more concrete detail.

When we talk about migration strategy, it is usually related to refactoring, rewriting, or replacing.

First, your system needs to be structured in such a way that you can make trade-offs at the component level and not system-wide. I discussed this at the very beginning of this series. It doesn't make sense to look at metrics and valuations at the component level unless you can actually trade off at that level.

Components with a low asset value (easy to substitute) and high liabilities are good candidates for replacing or rewriting. Often you can combine replacing and rewriting by finding a way to write thin custom layers around open source components that can replace these parts of the system. This typically requires a bit of refactoring as well, in order to allow other components to use the new component, so you often end up using a bit of each approach.

Components with a high asset value (hard to substitute) and high liabilities are candidates for refactoring. Sometimes you can pull parts of these components out into new components that can be replaced or rewritten, but often this won't get you far. Typically these high-value components are the core domain logic. The logic is often more sophisticated, and if you introduce new bugs while refactoring, they may be difficult to track down, as they can result in subtle incorrectness rather than blatant crashes and exceptions. Careful and incremental refactoring is usually the way to go.

Finally, sometimes there are good reasons to rewrite an entire application stack. This might be appropriate, for example, if you are switching to an entirely new runtime and technology stack. Sometimes this is called re-platforming. Don't rewrite in the same language and technology stack just because you have written a crappy code base and it is too fragile to change anymore. When you abandon ship in that case, you lose the value of all the lessons you will learn from refactoring it. Sometimes, the best approach is to refactor your way to a rewrite.
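The heuristics above can be summarized as a small decision rule. The thresholds here are mine, purely for illustration; the point is the shape of the triage, not the cutoffs:

    // Hypothetical migration-strategy triage from the two scores discussed above.
    public class MigrationTriage {
        public static String strategy(int assetValue, int liabilities) {
            boolean hardToSubstitute = assetValue >= 3;  // threshold assumed
            boolean heavilyIndebted  = liabilities >= 3; // threshold assumed
            if (heavilyIndebted && !hardToSubstitute)
                return "replace or rewrite: wrap an open source component in a thin layer";
            if (heavilyIndebted && hardToSubstitute)
                return "refactor carefully and incrementally: likely core domain logic";
            if (hardToSubstitute)
                return "protect this asset: keep investing, keep it well tested";
            return "leave it alone for now";
        }
    }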

Bridging the Gap

In the next piece, we will bridge the gap between finance and engineering. A big part of bridging this gap is noticing how each side tends to gravitate toward the asset and liability side of the balance sheet, respectively. This is where the technical balance sheet shines: allowing you to reduce ongoing investment into the plumbing aspects of your system and increase investment into your "special sauce."




About Brad Cross: Brad is a programmer.

Wednesday, March 25, 2009

Cross - A Field Guide for Applying Technical Financial Statements, Part I

By Brad Cross - 25 March 2009

So far, we have discussed a number of ways of applying financial models to software projects. We explored liabilities as an estimate of a project's technical debt. We explored assets as an estimate of the business value of software components. We explored techniques for computing equity as a function of assets and liabilities. We explored cost of carry and cost of switching as proxies for cash flows. We concluded by exploring techniques for framing tradeoff decisions based on equity and cash flow analysis.

The purpose of weighing assets against liabilities and cost of carry against cost of switching is to manage components of software projects like financial portfolios - assessing each component on its risk vs. return. The objective is to have a functional model for framing technical and business decisions about software. These models are not prescriptive. The purpose of the series so far has been to introduce the technical, financial, and process concepts.

Now that we understand these concepts, how do we put this into practice?

First of all, if your project is a complete mess, then collecting a bunch of metrics and rigorously applying financial concepts may not be appropriate at all. If you have only a handful of tests, then there really isn't much point in a careful analysis of test coverage and the distinction between unit test coverage and system test coverage. If your system is in really bad shape, your problems are often painfully obvious. That said, when it seems difficult to know where to start, the balance sheet model can help you pinpoint the worst problems and target the components that need the most attention.

I have applied this balance sheet approach in a number of different ways on my projects. I’ve used it to decide whether I should rewrite, refactor or replace un-maintainable components that carried too much technical debt. I’ve used it to inform estimates of effort: if an enhancement required modifying a component with high technical liabilities, I knew it was probably going to take longer. I’ve used it to prioritize work to maximize owner's equity by increasing priority of work in high-asset components and de-prioritizing or seeking open-source replacements for low-asset components.

I'm sure there are other ways to apply this model to software projects. For instance, if you have many different projects, programs, or business areas, you could apply this approach at different levels of granularity in the business. You could also use this way of thinking to frame decisions about green-field projects, even at startups. In fact, a classic startup mistake is to pile on too much technical debt in an effort to go faster, which results in going slower, and ultimately leads to hitting a brick wall.

Step 1: Choose the metrics that define your debt

I recommend sticking to a pragmatic, lightweight spirit with this technical balance sheet approach. Use what you can get on the cheap - without much time investment. On one project, we already had access to Clover, so we were able to mine its metrics "for free". We used these metrics to build the initial prototype of our balance sheet in an hour.

On my last project, we built a mind map (see below) of what we considered to be interesting metrics. We assembled the entire team to discuss what we thought were the most important aspects of the technical quality of the system (i.e. the most costly liabilities) and then identified which liabilities were measurable with current open source tools. It is interesting to note here that some aspects I typically evaluate are missing - such as code duplication. The team never mentioned this as one of the top concerns during this session.

Whatever metrics you choose, it is critical that they are actionable. A lot of tools will show you fancy charts, tables and diagrams, but few of these visual representations are actionable. You need to be able to identify prioritized lists of actions. For example, Simian or CPD will sort the list of code duplication by worst offenders; your obvious first action is to tackle the worst offenders. If you see that your highest bug counts and lowest test coverage are in one of your most valuable components, then working on the robustness of that component is a clear action item. Often you can find a few monolithic classes where many of the issues occur; refactoring these while bringing them under test can be a simple way to achieve a radical change in your equity for that component.

Step 2: Compute your financials by functional area

Here's a recap of our journey through the examples used in our discussions:

In the article on measuring technical liabilities, we constructed a table using a few simple and common metrics as indicators of technical indebtedness.

Functional Area   FX Cop   Coverage    Lines   Duplication   % Duplication
Brokers              139       1%       2984          234             8%
Data                  52      31%       1450          297            20%
DataProviders         59       1%       1210           78             6%
DataServer            27      48%       1489           40             3%
Execution              7      48%        618            0             0%
FIX                   27       1%      48484        39337            81%
Instruments          133      55%      12896          714             6%
Mathematics           77      56%       2551          205             8%
Optimization          25      60%        305           26             9%
Performance            2      73%        134            0             0%
Providers             36      47%        707           42             6%
Simulation            20      77%        241            0             0%
Trading               54      50%       2955          472            16%
TradingLibrary        66      29%       7035         1674            24%

Next, in the article on measuring assets and intangibles, we talked about using substitutability as a proxy for asset valuation in order to consolidate a basket of concrete metrics into an abstract relative metric.

Module          Substitutability
Brokers         2
Data            2
DataProviders   2
DataServer      2
Execution       3
FIX             1
Instruments     3
Mathematics     3
Optimization    3
Performance     3
Providers       1
Simulation      3
Trading         3
TradingLogic    4

In the piece on technical equity, we showed how to transform our table of metrics for technical liabilities into a number that can reasonably be compared with the asset value of each component, in order to derive a rough number representing technical equity and how leveraged each component is.

Component       Assets   Liabilities   Equity   Leverage
Brokers              2             3       -1   Infinity
Data                 2             3       -1   Infinity
DataProviders        2             3       -1   Infinity
DataServer           2             2        0   Infinity
Execution            3             2        1   3
FIX                  1             4       -3   Infinity
Instruments          3             3        0   Infinity
Mathematics          3             1        2   3/2
Optimization         3             1        2   3/2
Performance          3             1        2   3/2
Providers            1             1        0   Infinity
Simulation           3             1        2   3/2
Trading              3             3        0   Infinity
TradingLogic         4             3        1   4
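Reading the table, the arithmetic appears to follow the usual balance-sheet identities; this is my reconstruction of the columns, not something the original spells out:

    // Hypothetical reconstruction of the table's arithmetic, using the Mathematics row.
    public class LeverageCheck {
        public static void main(String[] args) {
            int assets = 3, liabilities = 1;
            int equity = assets - liabilities;                            // 2, matching the table
            String leverage = equity > 0 ? assets + "/" + equity : "Infinity";
            System.out.println(equity + ", " + leverage);                 // prints "2, 3/2"
        }
    }

On that reading, FIX (assets 1, liabilities 4) is the most dangerously leveraged component: deeply negative equity carrying the largest debt.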

Finally, in the article on cost of carry vs. cost of switching, we discussed thinking about tradeoff decisions in terms of cash flows when considering paying down more or less principal.

  • When you take on technical liabilities, you incur cash flow penalties in the form of ongoing interest payments, i.e. going slow.
  • When you pay down principal on your technical liabilities, you incur cash flow penalties in the form of payments against principal. However, these cash flow penalties are of a different nature: they come in the form of paying down principal in the short term (going slower right now) in exchange for paying less interest (going faster as the principal is paid down).
  • The notion of going faster or slower shows the connection between cash flows and time. The cost of ongoing interest payments is an incremental reduction in speed, whereas the cost of payments against principal is an investment of time in exchange for an increase in speed. Restated, there is a trade off between cash flow penalties now (paying the cost of switching) for decreased cash flow penalties in the future (reducing the cost of carry).

Based on my experience building software, I do not think that the relationship between cash flows, time, and speed is well understood. Much of the problem stems from confusion between the short and long term impact on cash flows that result from making certain tradeoffs. People cut corners under the auspices of short term speed. Often, this corner-cutting actually has the reverse of the intended effect, and can even destroy the chances of delivering. I have seen this thinking lead to the destruction of entire projects within 1 to 2 quarters.

Almost everyone will agree that a decade is long term and that taking on a lot of technical debt can be a significant risk to longevity. Fewer will agree that a year or more is long term. Very few will agree that a quarter is long term. Nevertheless, the more projects I work on, the shorter my definition of "long term" becomes with respect to technical debt. If you really look at the cash flow trade-offs that result in the relationship between time, speed, and technical debt, and you consider the compounding effect of negative cash flows that result from the debt, it becomes much less attractive to mindlessly acquire technical debt in the name of speed. It often results in going slower, even across the time horizon of a quarter or less.
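A toy model shows why the compounding matters even over a single quarter (the numbers are invented; the shape of the curve is the point, not the figures):

    // Hypothetical comparison over six iterations: keep cutting corners versus
    // spending the first iteration paying down principal to remove the drag.
    public class DebtDrag {
        public static void main(String[] args) {
            double velocity = 10.0;   // story points per iteration, debt-free
            double cutCorners = 0, paidDown = 0;
            for (int i = 1; i <= 6; i++) {
                cutCorners += velocity * Math.pow(0.90, i); // 10% compounding drag per iteration
                paidDown   += (i == 1) ? 0 : velocity;      // iteration 1 spent on principal
            }
            // Prints roughly 42.2 vs 50.0: the "fast" path is behind within six iterations.
            System.out.println(cutCorners + " vs " + paidDown);
        }
    }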

So now we have some crude numbers, and we understand how to think about cash flow trade-offs. In part II, we'll present how we formulate and execute a plan to increase technical equity.




About Brad Cross: Brad is a programmer.

Tuesday, March 10, 2009

Kehoe - Smug Post-Modernisms and Other Notions We Get Wrong

By John Kehoe - 10 March 2009

I was watching Gremlins 2 with my daughter this weekend (yes, I’m a bad dad, but don’t hold the sequel against me, just the fictional violence). What strikes me about the movie is how cheesy it is. Not the plot, but the technology: the video conferencing system, the voice-based building controls. I particularly like the talking fire alarm system giving a history of fire, but I digress. It is a great period piece for late-80’s business and technology (did you know that you could smoke in an office in 1990?). Yes, post-modern sophistication relegated to a period piece. Such is Father Time.

It got me thinking in a broader context. What are we getting wrong today that will be revealed with the passage of time? We can look at the history of scientific progress. Examples abound in astronomy, biology and physics. The same can be said of the social sciences, economics and politics. Up until the 1950s, the universe was thought to be quite small. Up until last year, bundled mortgages looked like a good way to diversify risk.

How do we know which horse to back? The first place to look is the ecosystem (yeah, sounds touchy-feely, but it isn’t) of the technology. Diamond created one of the first MP3 players, a 64MB job, years before Apple. Apple won the race, but why? They created a fully contained ecosystem. It consisted of a closed DRM format, content, exclusivity of content, the blessing of the RIAA and a logo program. It didn’t hurt that they hyped the heck out of it. Microsoft tried the same with Zune, but hasn’t had anywhere near the success. Microsoft was too late to the market and didn’t have the best marketing or industrial design (people like polished plastics and nickel alloy). The same is true of the other media players.

The ecosystem became pivotal. As a consumer, do I go with another ecosystem or do I go with iPod? My best mate abhors all things Apple (except his trusty Newton) and argues against the iPod: iTunes and iPod are closed DRM systems, the music isn’t portable to other systems, and Apple locks in content providers. The arguments are similar to those of the Linux, Apple, Microsoft or [fill in the blank with a comparable technology] proponents or opponents. The fact remains that most people choose the iPod because it has the most mature ecosystem.

So what if there is no ecosystem? How do I pick the winner? I resort to need and simplicity. What do I need to accomplish? For instance, suppose I have a customer facing application that brings in $100 per minute. When the transaction rate slows, I lose money. I can quantify "normal," define a cost of abnormal activity and prove what additional revenue I can create with further capacity. I can determine my cost for that performance delta. It is a simple model and readily understood. It guides what the impact is, what is my need and what can I afford. It’s a good way to avoid the technology weeds.
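A worked version of that model, with invented numbers, might look like this:

    // Hypothetical revenue-at-risk model for the $100-per-minute application.
    public class RevenueAtRisk {
        public static void main(String[] args) {
            double revenuePerMinute = 100.0;
            double degradation = 0.30;           // 30% slowdown when abnormal (assumed)
            double slowMinutesPerMonth = 600.0;  // assumed from monitoring history
            double monthlyLoss = revenuePerMinute * degradation * slowMinutesPerMonth;
            System.out.println("Monthly revenue at risk: $" + monthlyLoss); // $18,000
        }
    }

Any fix that costs less than that, amortized, pays for itself; anything more expensive goes back on the shelf.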

Time makes fools of us all. We can use that to our advantage. If you don’t need technology XYZ, can’t afford it or can’t absorb it, then don’t buy it. The new classic example is BluRay v. HD-DVD. Both were expensive technologies that consumers would not absorb. A hard press by Sony led to the capitulation of HD-DVD within a two-week period in early 2008. This made winners of the people who bought BluRay and the consumers that waited. Don’t mistake the initial BluRay owners for brilliant strategists: HD-DVD could have won as well. At any rate, the first adopters of BluRay paid $900 for bulky players. Better to wait for Wal*Mart to sell them for $99.95. The real winners are the consumers that sat out the battle.

So we use need and time to our advantage as best we can. We can use a contrarian perspective to the technology cycle. Think of this as the Devil’s Advocate (and yes there is a Devil’s Advocate in the Vatican). This would be considered the "B.S. detector" (a characteristic well honed by Mrs. Kehoe and applied to the auto dealer or to me asking for a 52” big screen). This leads to a skeptical mindset, a healthy maladjustment of the trusting mind.

Consider the evolution of broadband. Fifteen years ago technologists thought it essential, but prohibitive in cost (think ISDN, a.k.a. “I Still Don’t Need”). We knew (or at least strongly suspected) what we could do with broadband communications: distribute information, telecommute (the real reason IT guys pushed broadband), new forms of communication, WebEx (which didn’t exist fifteen years ago), shopping, expansion of markets, outsourcing, offshoring, distributed teams, etc. The wheels come off the bus when we start standing up 100 Mbps internet, free municipal Wi-Fi and universal broadband. Why are they needed? Is it to keep up with Elbonia? Why should there be a government-run Wi-Fi network? If people don’t want broadband, why force the build-out of that capacity? The sixty-four-million-dollar question is: when does a technology become valuable? Fibre to the house was goofy 20 years ago. If you have to ask why, ask The Creator why he needs a starship.

Despite our best efforts, time will still embarrass us (really, the K car was a brilliant idea). What has been the long-term impact of Michael Jackson? He went from being King of Pop to Regent of Ridicule in short order. Will Miley Cyrus be the Max Headroom of today? (I do have to claim that my daughter is not a Miley fan; I can’t be that bad of a father.) So foolishness can rule the day, but I doubt that Sir Mix-A-Lot’s “Baby Got Back” will be considered "classical music" in two hundred years. Nor do we see Sun Microsystems’ “We’re the dot in dot-com” commercials (’99-’00) as the launch pad of corporate success, but as an apex of hubris signaling the impending internet bust of ’00.

By looking at the merits of a solution in the context of its ecosystem, need, simplicity, time and our return models, we minimize our risks and bring a skeptical mindset to the hype cycle. Let's not be the next "dot" in "dot bomb."




About John Kehoe: John is a performance technologist plying his dark craft since the early nineties. John has a penchant for parenthetical editorializing, puns and mixed metaphors (sorry). You can reach John at exoticproblems@gmail.com.

Tuesday, February 24, 2009

Pettit - Restructuring IT: Making In-Flight Change

By Ross Pettit - 24 February 2009

All businesses are undergoing unprecedented changes. Revenue forecasts aren’t materializing, capital structures are proving unsustainable, and operations are being scrutinized for inefficiencies. This, in turn, means that businesses are being completely restructured in how they are capitalized, organized, managed and governed. As businesses restructure, so will IT.

As we face restructure, we have to look critically at our IT lifestyle. We know that if we eat a poor diet and don’t exercise, we run a greater risk of health problems than if we eat a healthy diet and exercise regularly. The same applies to IT: if our work habits lack discipline, we’re going to have health problems that, in turn, put business operations at risk equivalent to heart disease or diabetes.

Agile and Lean offer IT a healthy lifestyle choice: disciplined execution of a set of best practices gives greater focus, consistency and transparency to operations than we get from traditional approaches to IT. Experience in a variety of business domains tells us that we can expect significant impact from taking on Agile practices: we will reduce the probability of catastrophic failure, defects will decline substantially, delivery times will accelerate, and the business value of what we deliver will increase.

But restructuring to be an Agile / Lean IT organization is not something that happens by management fiat. Agile IT requires that each person embrace fundamental principles that value the business problem at hand over the technical problems we concoct. This requires behavioral changes that run counter to decades of training, messaging and mentoring embedded in the IT profession. In fact, it goes to the heart of the oft-cited gap between business and IT: day-to-day IT activity is often fundamentally misguided, as people are busy solving the wrong problems. Indeed, it is not uncommon for people in IT to subordinate a very real business need to the pursuit of speculative technological “future-proofing.”

Given the urgent need to restructure, this behavioral gap presents IT leaders with a significant challenge.

First, restructuring takes a lot of effort. If restructuring IT requires changes to fundamental work habits, we must not underestimate the amount of effort that will be needed to bring this change about. Bridging the gap between “how we work today” and “the results we must achieve given the reality we face” is not simply an exercise in tools and training. It requires a concerted effort to change the behaviours that underlie how work is done: how requirements are defined, solutions are developed, teams are organized and managed and IT is governed. This comes about through experience. Change happens within each team and department as people gain proficiency with new work habits while delivering, supporting, and maintaining solutions.

Second, IT can't shut down while it restructures; it must be restructured while delivery work is in full flight. That means the restructuring effort itself will be subjected to change and adaptation. This makes restructuring a moving target, which both blurs the vision of the target state and wears down people’s patience and energy for the change.

Significant organizational effort spent in pursuit of a moving target will put the change leader in a constant state of conflict. On the one hand, he or she risks managing to a compromise, where long-term sustainability is sacrificed for short-term "results" (often, ironically, in the name of pragmatism). For example, a project team may elect not to introduce unit testing because the effort is believed to be too great and the business need too urgent. The team may write code faster initially, but it will prove to be a false efficiency as defects rise and the time spent refactoring obliterates any gains. On the other hand, the change leader must not be dogmatic. An IT organization doesn’t exist so that it can be Lean; it exists so it can deliver business results. Too great an emphasis on process – insisting on 100% unit test coverage, for example, just for the sake of having high unit test coverage – risks prioritizing process over results.
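
To make the unit-testing trade-off concrete, here is a minimal sketch of the discipline in question, using Python's standard unittest module (the apply_discount function is a hypothetical business rule, invented purely for illustration):

    import unittest

    def apply_discount(price, rate):
        # Hypothetical business rule: discount a price by a fractional rate.
        if not 0 <= rate <= 1:
            raise ValueError("rate must be between 0 and 1")
        return round(price * (1 - rate), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(100.00, 0.15), 85.00)

        def test_rejects_invalid_rate(self):
            with self.assertRaises(ValueError):
                apply_discount(100.00, 1.5)

    if __name__ == "__main__":
        unittest.main()

The few minutes spent on tests like these are repaid the first time a hurried refactoring silently changes the rounding or validation behavior; skipping them is exactly the false efficiency described above.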

Most of the challenges the change leader will face during in-flight restructuring come down to a simple, if not always obvious, test: are we reconciling the reorganization to the business demands, or reconciling the reorganization to old work habits? The first part of the test helps us be sure that dogma doesn’t trump results. The second reminds us that we must not compromise in the name of making everybody happy.

To make this decision consistently, even when the goalposts are moving, change leaders must have a clear understanding of both the business need and the goals of the restructuring. By keeping business outcomes, such as reduced defects or accelerated time to delivery, the clear priority, we bring unambiguous focus to the change effort. And that focus is critical. Every day, change will be challenged by all kinds of things: distracting and counter-productive technical objectives (e.g., “we need to solve every possible problem we may have in this and any future application that requires session management”), irrelevant, non-value-added IT practices (e.g., "we've always required effort-remaining project status reports"), and individual motivations that run counter to change (such as job preservation or people's "stationary inertia" at work). In-flight operations restructuring requires intense, unrelenting effort to swim against the tide of “how things have always been done here.” The change leader must have a clear vision for operations that is flexible to business demand but uncompromising to IT convenience if there is to be a real, durable restructuring.

The change leader must always be clear that the goal of restructuring is to have the organization executing in such a way that it sustainably yields better performance. To be sustainable, we don't just need solutions to report improved technical measures; we need to work in such a way that day-to-day project decisions do no harm, much like decisions we take in our personal lifestyles. For example, we can send code to the "technical clinic" for IT liposuction, where we invest time in remediating tight coupling, dependencies and complexity so that a team has a clean code base, free of technical debt. However, just as the stomach-stapled person may resume bad dietary habits and regain weight, so may the IT team with the remediated code base resume bad habits. This means that IT leaders must insist on constant, independent validation that restructuring has taken root. One way to do this is to regularly scorecard team performance to scrutinize execution. Another is to make sure that organizational structures – incentives and rewards, recruiting and promotion, governance and oversight – reinforce the Agile value system. If these things are done, old habits are unlikely to return.

The pressure has never been greater on IT. Business is asking, “what are you doing for me this quarter?” with increasing anticipation and scrutiny. The best guarantee that IT operations can adequately and professionally answer this question is to execute in a transparent, consistent and disciplined manner. In an uncertain business world, that’s the best form of “future-proofing” IT can pursue.




About Ross Pettit: Ross has over 15 years' experience as a developer, project manager, and program manager working on enterprise applications. A former COO, Managing Director, and CTO, he also brings extensive experience managing distributed development operations and global consulting companies. His industry background includes investment and retail banking, insurance, manufacturing, distribution, media, utilities, market research and government. He has most recently consulted to global financial services and media companies on transformation programs, with an emphasis on metrics and measurement. Ross is a frequent speaker and active blogger on topics of IT management, governance and innovation. He is also the editor of alphaITjournal.com.

Wednesday, February 11, 2009

Reiser - Getting the Most out of Offshore Development

By Greg Reiser - 11 February 2009

Many businesses avoid using offshore resources to implement high-value, high-complexity software projects, and many who try are frequently dissatisfied. In a recent survey, Forrester Consulting found that nearly half (46%) of businesses were unhappy with their offshore provider for development of mission-critical applications.

This raises the question: “Is offshore development incompatible with high-value, high-complexity projects?”

My experience suggests otherwise. I have worked with many talented teams that have successfully delivered high-value, high-complexity solutions using a distributed development model that takes advantage of talent in multiple locations around the world.

In this article I will briefly describe how specific practices, some associated with agile development, will help you get maximum value out of your distributed development efforts.

(Please note that I prefer to use the term “distributed development” instead of “offshore development”. This describes projects where work occurs in multiple locations. It is my experience that there are no “offshore” projects. Rather, there are distributed projects where a good bit of the work is performed offshore.)

Involve the whole team in the planning – This is counter-intuitive to those organizations that rely on up-front analysis and planning and then engage a development team (possibly a vendor) for the implementation work. That approach presents two problems.

First, the development team will insist on doing its own analysis in order to fully understand the business requirements document, use cases, RFP, etc. Hence, some of your up-front effort will be redundant.

Second, and more importantly, it’s during the up-front analysis and planning that the business sponsors and subject-matter experts establish a shared understanding of the project vision, business objectives and priorities. If the development team (or vendor) is not involved in this process, they will be less effective in adapting when the inevitable surprises occur.

Sorry folks, documentation just doesn’t cut it. To paraphrase General Dwight D. Eisenhower, “The plan is nothing, but planning is everything.”

Monitor the product, not the process – Many agilists criticize earned value analysis (“EVA”). Although the EVA technique has shortcomings, the basic idea of assessing project progress by the value delivered is sound. It is much better to assess progress based on functionality delivered rather than interim deliverables such as the System Design Specification (“SDS”) or worse, budget or schedule consumed.

Whether the project team is distributed or not, in-flight metrics based on implemented functionality and user feedback are significantly more reliable barometers than traditional “percent complete” metrics. How many projects have you seen that are 90% complete for half their actual schedule?
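
As a back-of-the-envelope illustration (a sketch with invented feature names and point values, not a prescription), progress by functionality delivered can be computed directly from what users have accepted:

    # Each feature carries a value weight; only features accepted by users
    # count toward progress. All numbers are invented for illustration.
    features = [
        {"name": "account lookup",  "points": 8,  "accepted": True},
        {"name": "payment posting", "points": 13, "accepted": True},
        {"name": "statement print", "points": 5,  "accepted": False},
    ]

    delivered = sum(f["points"] for f in features if f["accepted"])
    planned = sum(f["points"] for f in features)

    # Progress reflects working software, not budget or schedule consumed.
    print("Value delivered: %d of %d points (%.0f%%)"
          % (delivered, planned, 100.0 * delivered / planned))

Note that a metric like this cannot read 90% unless 90% of the weighted functionality has actually been accepted, which is precisely what makes it the more honest barometer.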

Although the SDS may be a very important and valuable artifact (I’m not one of those people who demonize all documentation), its veracity is suspect until an implementation validates it.

Admit that you have a (communication) problem – It doesn’t matter if your project methodology is agile, waterfall or something else. Communication and efficiency suffer when team members are separated by miles. It gets worse when they are separated by time zones as well. In addition to the obvious investments in communications infrastructure and practices (high-quality speakerphones, high-bandwidth networks, overlapping workday schedules, etc.), consider some not so obvious practices.

One of my favorites is the “exchange program”. Have people from each work location spend time working (not just visiting) in the other work location. I’m talking about at least two weeks at a time, working side by side with peers in the other geography. This is both an effective knowledge-sharing and team-building technique.

Don’t give in to the argument that “money saved on travel can be more effectively applied to real work”. The improvements in efficiency far outweigh the additional travel costs. On one large complex program we experienced a 200% increase in throughput after we implemented an exchange program.

I don’t necessarily advocate expensive video-conferencing equipment and services. Although it is a blessing to have access to such equipment, the cost-benefit is not as favorable as it is for less technically sexy investments.

The bottom line is that your project plan must address the communication challenges inherent in distribution. If the necessary investments exceed the benefits of doing the project in a distributed way then don’t do it that way.

Redundant roles – A typical pattern for distributed projects is to have the project manager, subject-matter experts (“SMEs”) and business analysts at the customer site and the developers and testers at an offshore facility. This works quite well for small, short-term projects, but it is sub-optimal for large, complex programs.

Concentration of expertise and/or decision-making in one location slows the project down, as the development team often has to wait a full day (best case) when certain types of impediments arise.

Having business analysts and/or SMEs close to the developers and testers improves throughput as questions are answered much more quickly. Having project managers in each location improves throughput as many decisions can be made much more quickly. Having developers at the client site improves throughput as critical defects can be resolved immediately and integration challenges can be addressed in a timely manner.

From a lean perspective, redundant roles are actually a best practice: the apparent waste of the redundancy is significantly less than the waste generated by the wait times in the more typical staffing model. A rough sketch of that arithmetic follows.
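
Here is the comparison in miniature (all figures invented; real numbers will vary by program):

    # Compare the cost of a 'redundant' on-site analyst against the
    # wait-time waste of an offshore team blocked across time zones.
    blocked_developers = 3        # developers idled per unanswered question
    questions_per_month = 12      # questions needing an on-site answer
    wait_days_per_question = 1.0  # best-case round trip across time zones

    wait_waste = blocked_developers * questions_per_month * wait_days_per_question
    redundant_role_cost = 20.0    # one extra analyst for one month, in person-days

    print("Wait waste: %.0f person-days vs. redundant role: %.0f person-days"
          % (wait_waste, redundant_role_cost))
    # 36 vs. 20: the 'redundant' analyst is the cheaper form of waste.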

Monitor technical integrity – “Technical debt” is a metaphor developed by Ward Cunningham that describes the long-term costs associated with “quick and dirty” development. The “interest” on this debt is realized in the form of increasing development costs due to inflexible and/or overly complex design, excess dependencies, duplication, defects, etc.
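
To put rough numbers on the metaphor (all figures invented, for illustration only), a shortcut that saves days now accrues interest on every subsequent change that touches the compromised code:

    # A shortcut saves 5 person-days today but adds half a day of drag
    # ('interest') to each later change touching that code.
    days_saved_now = 5.0
    interest_per_change = 0.5
    changes_per_month = 8

    months_to_break_even = days_saved_now / (interest_per_change * changes_per_month)
    print("The saving is exhausted after %.2f months" % months_to_break_even)
    # After ~1.25 months, every further change costs more than the shortcut saved.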

Technical debt management is important for any project. Distributed development makes technical debt management more difficult because some, if not most, of the code is developed by the offshore part of the team. Hence there is a greater risk that, by the time the symptoms of too much technical debt become obvious, the cost to correct will be high.

You might replace your offshore team due to poor performance and you might even recover some of your costs. But if this really is a mission-critical project, your business will still suffer. Although you can and should apply manual techniques for monitoring the technical integrity of the software under construction, I have found that the use of automated tools to monitor indicators such as automated test coverage, adherence to coding standards, design quality and design complexity can serve as early warning signals for technical debt growth.

Using these types of tools within an automated build process that sounds an alarm when metrics fall outside specified thresholds accomplishes two things. First, it triggers investigation and potential remediation while it is still possible to do so at low cost. Second, it automatically encourages design discipline. No one wants to be singled out for violating coding standards, for not following the agreed-upon testing discipline, or for writing unnecessarily complex code.
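
A minimal sketch of such a build gate follows (the metric names, values and thresholds are all invented; in practice the numbers would be read from your coverage and static-analysis tools' reports):

    import sys

    # Metrics as a real build would harvest them from tool output;
    # hard-coded here for illustration.
    metrics = {"test_coverage": 68.0, "avg_cyclomatic_complexity": 14.2}

    # Thresholds the team has agreed on: ('min', x) means the metric must
    # not fall below x; ('max', x) means it must not exceed x.
    thresholds = {
        "test_coverage": ("min", 75.0),
        "avg_cyclomatic_complexity": ("max", 10.0),
    }

    failures = []
    for name, (kind, limit) in thresholds.items():
        value = metrics[name]
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            failures.append("%s = %.1f violates %s %.1f" % (name, value, kind, limit))

    if failures:
        print("BUILD FAILED:\n  " + "\n  ".join(failures))
        sys.exit(1)  # a non-zero exit is the 'alarm' the build server hears
    print("All metrics within thresholds")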

Once again, a modest investment can reduce risk and save significant costs within a relatively short period of time.

These are but a few techniques, some obvious and some not so obvious, that help make it possible to execute high-value, high-complexity projects and programs via a distributed model. Look for future articles with more practical lessons from the trenches.




About Greg Reiser: Greg is a software development professional with 20+ years of experience as a developer, project manager and a consultant. He has experience in a wide range of industries including banking, insurance, publishing, logistics, healthcare and telecommunications. Greg has helped numerous enterprises deliver mission-critical solutions using advanced software development practices. He is currently focused on helping organizations get the most out of global development capabilities.