Wednesday, March 25, 2009

Cross - A Field Guide for Applying Technical Financial Statements, Part I

By Brad Cross - 25 March 2009

So far, we have discussed a number of ways of applying financial models to software projects. We explored liabilities as an estimate of a project's technical debt. We explored assets as an estimate of the business value of software components. We explored techniques for computing equity as a function of assets and liabilities. We explored cost of carry and cost of switching as proxies for cash flows. We concluded by exploring techniques for framing tradeoff decisions based on equity and cash flow analysis.

The purpose of weighing assets against liabilities and cost of carry against cost of switching is to manage components of software projects like financial portfolios - assessing each component on its risk vs. return. The objective is to have a functional model for framing technical and business decisions about software. These models are not prescriptive. The purpose of the series so far has been to introduce the technical, financial, and process concepts.

Now that we understand these concepts, how do we put this into practice?

First of all, if your project is a complete mess, then collecting a bunch of metrics and rigorously applying financial concepts may not be appropriate at all. If you have only a handful of tests, then there really isn't much point in a careful analysis of test coverage and the distinction between unit test coverage and system test coverage. If your system is in really bad shape, your problems are often painfully obvious. That said, when it seems difficult to know where to start, the balance sheet model can help you pinpoint the worst problems and target the components that need the most attention.

I have applied this balance sheet approach in a number of different ways on my projects. I've used it to decide whether to rewrite, refactor, or replace unmaintainable components that carried too much technical debt. I've used it to inform estimates of effort: if an enhancement required modifying a component with high technical liabilities, I knew it would probably take longer. And I've used it to prioritize work to maximize owner's equity: raising the priority of work on high-asset components, and de-prioritizing or seeking open-source replacements for low-asset components.

I'm sure there are other ways to apply this model to software projects. For instance, if you have many different projects, programs, or business areas, you could apply the approach at different levels of granularity in the business. You could also use this way of thinking to frame decisions about green-field projects, even at startups. In fact, a classic startup mistake is to pile on technical debt in an effort to go faster, which results in going slower and ultimately in hitting a brick wall.

Step 1: Choose the metrics that define your debt

I recommend sticking to a pragmatic, lightweight spirit with this technical balance sheet approach. Use what you can get on the cheap - without much time investment. On one project, we already had access to Clover, so we were able to mine its metrics "for free". We used those metrics to build the initial prototype of our balance sheet in an hour.
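If you happen to have Clover reports lying around too, the per-package numbers are easy to pull out of its XML output. A minimal sketch - assuming Clover's report layout of `<package>` elements each containing a `<metrics>` element with `statements` and `coveredstatements` attributes:

```python
import xml.etree.ElementTree as ET

def package_coverage(clover_xml_path):
    """Map each package name to its statement-coverage ratio."""
    tree = ET.parse(clover_xml_path)
    coverage = {}
    for pkg in tree.getroot().iter("package"):
        metrics = pkg.find("metrics")
        statements = int(metrics.get("statements", "0"))
        covered = int(metrics.get("coveredstatements", "0"))
        coverage[pkg.get("name")] = covered / statements if statements else 0.0
    return coverage
```

A dictionary like this, keyed by package or functional area, is all the raw material the balance sheet prototype needs.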

On my last project, we built a mind map (see below) of what we considered to be interesting metrics. We assembled the entire team to discuss what we thought were the most important aspects of the technical quality of the system (i.e. the most costly liabilities) and then identified which liabilities were measurable with current open source tools. It is interesting to note here that some aspects I typically evaluate are missing - such as code duplication. The team never mentioned this as one of the top concerns during this session.

Whatever metrics you choose, it is critical that they are actionable. A lot of tools will show you fancy charts, tables and diagrams, but few of these visual representations are actionable. You need to be able to identify prioritized lists of actions. For example, Simian or CPD will sort the list of code duplication by worst offenders; your obvious first action is to tackle the worst offenders. If you see that your highest bug counts and lowest test coverage are in one of your most valuable components, then working on the robustness of that component is a clear action item. Often you can find a few monolithic classes where many of the issues occur; refactoring these while bringing them under test can be a simple way to achieve a radical change in your equity for that component.
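The "worst offenders" idea is just a sort over whatever duplication counts your tool emits. A small illustration, using duplicated-line figures drawn from the liabilities table later in this article:

```python
# Duplicated-line counts per component; figures taken from the
# technical-liabilities table in Step 2 of this article.
duplication = {
    "FIX": 39337, "TradingLibrary": 1674, "Instruments": 714,
    "Trading": 472, "Data": 297, "Brokers": 234,
}

def worst_offenders(dup_counts, top_n=3):
    """Return component names sorted by duplicated lines, worst first."""
    return sorted(dup_counts, key=dup_counts.get, reverse=True)[:top_n]

print(worst_offenders(duplication))  # FIX tops the list by a wide margin
```

The output is a prioritized action list rather than a chart: start with FIX, then TradingLibrary, then Instruments.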

Step 2: Compute your financials by functional area

Here's a recap of our journey through the examples used in our discussions:

In the article on measuring technical liabilities, we constructed a table using a few simple and common metrics as indicators of technical indebtedness.

| Functional Area | FxCop | Coverage | Lines | Duplication | % Duplication |
| --- | --- | --- | --- | --- | --- |
| Brokers | 139 | 1% | 2984 | 234 | 8% |
| Data | 52 | 31% | 1450 | 297 | 20% |
| DataProviders | 59 | 1% | 1210 | 78 | 6% |
| DataServer | 27 | 48% | 1489 | 40 | 3% |
| Execution | 7 | 48% | 618 | 0 | 0% |
| FIX | 27 | 1% | 48484 | 39337 | 81% |
| Instruments | 133 | 55% | 12896 | 714 | 6% |
| Mathematics | 77 | 56% | 2551 | 205 | 8% |
| Optimization | 25 | 60% | 305 | 26 | 9% |
| Performance | 2 | 73% | 134 | 0 | 0% |
| Providers | 36 | 47% | 707 | 42 | 6% |
| Simulation | 20 | 77% | 241 | 0 | 0% |
| Trading | 54 | 50% | 2955 | 472 | 16% |
| TradingLibrary | 66 | 29% | 7035 | 1674 | 24% |

Next, in the article on measuring assets and intangibles, we talked about using substitutability as a proxy for asset valuation in order to consolidate a basket of concrete metrics into an abstract relative metric.

| Module | Substitutability | Module | Substitutability |
| --- | --- | --- | --- |
| Brokers | 2 | Mathematics | 3 |
| Data | 2 | Optimization | 3 |
| DataProviders | 2 | Performance | 3 |
| DataServer | 2 | Providers | 1 |
| Execution | 3 | Simulation | 3 |
| FIX | 1 | Trading | 3 |
| Instruments | 3 | TradingLogic | 4 |

In the piece on technical equity, we showed how to transform our table of technical liability metrics into a number that can reasonably be compared with the asset value of each component, deriving a rough figure for each component's technical equity and for how leveraged it is.

| Component | Assets | Liabilities | Equity | Leverage |
| --- | --- | --- | --- | --- |
| Brokers | 2 | 3 | -1 | Infinity |
| Data | 2 | 3 | -1 | Infinity |
| DataProviders | 2 | 3 | -1 | Infinity |
| DataServer | 2 | 2 | 0 | Infinity |
| Execution | 3 | 2 | 1 | 3 |
| FIX | 1 | 4 | -3 | Infinity |
| Instruments | 3 | 3 | 0 | Infinity |
| Mathematics | 3 | 1 | 2 | 3/2 |
| Optimization | 3 | 1 | 2 | 3/2 |
| Performance | 3 | 1 | 2 | 3/2 |
| Providers | 1 | 1 | 0 | Infinity |
| Simulation | 3 | 1 | 2 | 3/2 |
| Trading | 3 | 3 | 0 | Infinity |
| TradingLogic | 4 | 3 | 1 | 4 |
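The arithmetic behind this table is simple enough to sketch: equity is assets minus liabilities, and leverage is assets relative to the equity cushion, going infinite as the cushion shrinks to zero or below. A minimal version:

```python
import math
from fractions import Fraction

def equity(assets, liabilities):
    """Owner's equity: what remains after liabilities offset assets."""
    return assets - liabilities

def leverage(assets, liabilities):
    """Assets over equity; infinite when there is no equity cushion."""
    eq = equity(assets, liabilities)
    return Fraction(assets, eq) if eq > 0 else math.inf

# A few rows from the table above:
for name, (a, l) in {"Execution": (3, 2), "Mathematics": (3, 1), "FIX": (1, 4)}.items():
    print(name, equity(a, l), leverage(a, l))
```

The point of the leverage column is the same as in finance: a component with negative or zero equity (FIX, Brokers, Data) is infinitely leveraged, so any work routed through it is carrying the full weight of its liabilities.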

Finally, in the article on cost of carry vs. cost of switching, we discussed thinking about tradeoff decisions in terms of cash flows when considering paying down more or less principal.

  • When you take on technical liabilities, you incur cash flow penalties in the form of ongoing interest payments, i.e. going slow.
  • When you pay down principal on your technical liabilities, you incur cash flow penalties in the form of payments against principal. However, these cash flow penalties are of a different nature: they come in the form of paying down principal in the short term (going slower right now) in exchange for paying less interest (going faster as the principal is paid down).
  • The notion of going faster or slower shows the connection between cash flows and time. The cost of ongoing interest payments is an incremental reduction in speed, whereas the cost of payments against principal is an investment of time in exchange for an increase in speed. Restated, there is a trade-off between cash flow penalties now (paying the cost of switching) and decreased cash flow penalties in the future (reducing the cost of carry).

Based on my experience building software, I do not think the relationship between cash flows, time, and speed is well understood. Much of the problem stems from confusing the short-term and long-term impact on cash flows of a given tradeoff. People cut corners in the name of short-term speed. Often this corner-cutting has the reverse of the intended effect, and can even destroy the chances of delivering at all. I have seen this thinking lead to the destruction of entire projects within one or two quarters.

Almost everyone will agree that a decade is long term and that taking on a lot of technical debt can be a significant risk to longevity. Fewer will agree that a year or more is long term. Very few will agree that a quarter is long term. Nevertheless, the more projects I work on, the shorter my definition of "long term" becomes with respect to technical debt. If you really look at the cash flow trade-offs that result in the relationship between time, speed, and technical debt, and you consider the compounding effect of negative cash flows that result from the debt, it becomes much less attractive to mindlessly acquire technical debt in the name of speed. It often results in going slower, even across the time horizon of a quarter or less.
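To see why, it helps to put toy numbers on the trade-off. The sketch below is purely illustrative - the 10-points-per-week velocity, 5% weekly compounding drag, and two-week paydown are invented assumptions, not measurements from a real project:

```python
def cumulative_output(weeks, velocity, weekly_drag, paydown_weeks=0):
    """Total feature output over a time horizon.

    Carrying the debt: output decays by `weekly_drag` each week, i.e.
    compounding interest payments. Paying it down: `paydown_weeks` of
    zero output up front, then full velocity with no drag.
    """
    total = 0.0
    for week in range(weeks):
        if paydown_weeks:
            if week >= paydown_weeks:
                total += velocity  # principal retired: full speed
        else:
            total += velocity * (1 - weekly_drag) ** week  # interest drag
    return total

quarter = 13  # weeks
carry = cumulative_output(quarter, velocity=10, weekly_drag=0.05)
paydown = cumulative_output(quarter, velocity=10, weekly_drag=0.05, paydown_weeks=2)
```

Under these assumptions, carrying the debt wins over a six-week horizon but loses over a thirteen-week quarter - which is exactly the short-term versus long-term confusion described above.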

So now we have some crude numbers, and we understand how to think about cash flow trade-offs. In part II, we'll show how to formulate and execute a plan to increase technical equity.




About Brad Cross: Brad is a programmer.

Tuesday, March 10, 2009

Kehoe - Smug Post-Modernisms and Other Notions We Get Wrong

By John Kehoe - 10 March 2009

I was watching Gremlins 2 with my daughter this weekend (yes, I'm a bad dad, but don't hold the sequel against me, just the fictional violence). What strikes me about the movie is how cheesy it is - not the plot, but the technology: the video conferencing system, the voice-based building controls. I particularly like the talking fire alarm giving a history of fire, but I digress. It is a great period piece for late-'80s business and technology (did you know you could smoke in an office in 1990?). Yes, post-modern sophistication relegated to a period piece. Such is Father Time.

It got me thinking in a broader context: what are we getting wrong today that will be revealed with the passage of time? We can look at the history of scientific progress; examples abound in astronomy, biology, and physics. The same can be said of the social sciences, economics, and politics. Up until the 1950s, the universe was thought to be quite small. Up until last year, bundled mortgages looked like a good way to diversify risk.

How do we know which horse to back? The first place to look is the ecosystem (yeah, it sounds touchy-feely, but it isn't) around the technology. Diamond shipped one of the first MP3 players, a 64MB job, years before Apple. Apple won the race, but why? They created a fully contained ecosystem: a closed DRM format, content, exclusivity of content, the blessing of the RIAA, and a logo program. It didn't hurt that they hyped the heck out of it. Microsoft tried the same with the Zune, but hasn't had anywhere near the success: it was too late to market and didn't have the best marketing or industrial design (people like polished plastics and nickel alloy). The same is true of the other media players.

The ecosystem became pivotal. As a consumer, do I go with another ecosystem or do I go with the iPod? My best mate abhors all things Apple (except his trusty Newton) and argues against the iPod: iTunes and the iPod are closed DRM systems, the music isn't portable to other systems, and Apple locks in content providers. The arguments are similar to those of the Linux, Apple, Microsoft or [fill in the blank with a comparable technology] proponents and opponents. The fact remains that most people choose the iPod because it has the most mature ecosystem.

So what if there is no ecosystem? How do I pick the winner? I resort to need and simplicity: what do I need to accomplish? For instance, suppose I have a customer-facing application that brings in $100 per minute. When the transaction rate slows, I lose money. I can quantify "normal," define a cost of abnormal activity, and prove what additional revenue I can create with further capacity. I can determine my cost for that performance delta. It is a simple model and readily understood: it frames what the impact is, what my need is, and what I can afford. It's a good way to stay out of the technology weeds.

Time makes fools of us all. We can use that to our advantage. If you don't need technology XYZ, can't afford it, or can't absorb it, then don't buy it. The new classic example is Blu-ray vs. HD-DVD. Both were expensive technologies that consumers would not absorb. A hard press by Sony led to the capitulation of HD-DVD within a two-week period in early 2008. This made winners of the people who bought Blu-ray and of the consumers who waited. Don't mistake the initial Blu-ray owners for brilliant strategists: HD-DVD could just as easily have won. At any rate, the first adopters of Blu-ray paid $900 for bulky players; better to wait for Wal*Mart to sell them for $99.95. The real winners are the consumers who sat out the battle.

So we use need and time to our advantage as best we can. We can use a contrarian perspective to the technology cycle. Think of this as the Devil’s Advocate (and yes there is a Devil’s Advocate in the Vatican). This would be considered the "B.S. detector" (a characteristic well honed by Mrs. Kehoe and applied to the auto dealer or to me asking for a 52” big screen). This leads to a skeptical mindset, a healthy maladjustment of the trusting mind.

Consider the evolution of broadband. Fifteen years ago, technologists thought it essential but prohibitive in cost (think ISDN, a.k.a. "I Still Don't Need"). We knew (or at least strongly suspected) what we could do with broadband communications: distribute information, telecommute (the real reason IT guys pushed broadband), invent new forms of communication like WebEx (which didn't exist fifteen years ago), shop, expand markets, outsource, offshore, run distributed teams, and so on. The wheels come off the bus when we start standing up 100 Mbps internet, free municipal Wi-Fi, and universal broadband. Why are they needed? Is it to keep up with Elbonia? Why should there be a government-run Wi-Fi network? If people don't want broadband, why force the build-out of that capacity? The sixty-four-million-dollar question is: when does a technology become valuable? Fibre to the house looked goofy 20 years ago. If you have to ask, ask The Creator why he needs a starship.

Despite our best efforts, time will still embarrass us (really, the K car was a brilliant idea). What has been the long-term impact of Michael Jackson? He went from King of Pop to Regent of Ridicule in short order. Will Miley Cyrus be the Max Headroom of today? (I do have to note that my daughter is not a Miley fan; I can't be that bad of a father.) So foolishness can rule the day, but I doubt that Sir Mix-A-Lot's "Baby Got Back" will be considered "classical music" in two hundred years. And we don't see Sun Microsystems' "We're the dot in dot-com" commercials ('99-'00) as the launch pad of corporate success, but as an apex of hubris signaling the impending internet bust of '00.

By looking at the merits of a solution in the context of its ecosystem, need, simplicity, time, and our return models, we minimize our risks and bring a skeptical mindset to the hype cycle. Let's not be the next "dot" in "dot bomb."




About John Kehoe: John is a performance technologist plying his dark craft since the early nineties. John has a penchant for parenthetical editorializing, puns and mixed metaphors (sorry). You can reach John at exoticproblems@gmail.com.