Thursday, July 31, 2008

Ververs - Gen-Y: An Owner's Manual

by Carl Ververs - 31 July 2008

Just as the Baby Boomers are preparing for retirement and Generation X is licking its chops to take over, a new generation conflict has arisen: Generation Y is upon us.

We are told that today’s young people entering the workplace, also identified as “The Millennials,” are unmanageable. They are excruciatingly nonconformist, suspicious of authority, demand to spend their time in a meaningful, environmentally sound way and are strong believers in communal thinking.

Now hold on a minute! Isn’t that how the hippies of the late ‘60s were described? Those same unwashed idealists who are now captains of industry? Yes, spot-on. The only difference is that Generation Y is addicted to communications rather than cannabis.

Where the hippies were disenchanted with their pre-war parents’ prefab lifestyle, narrow-mindedness and geo-phobia, the Millennials are resisting their parents’ greed, excess, selfishness and arrogance. The defining events for the Boomers were the Vietnam War and the assassinations of Martin Luther King and the Kennedys. For Generation Y, they are the collapse of the dot-com bubble, the September 11 terrorist attacks, and the global economic recession that followed.

So who are these Millennials anyway? Let’s take a look at the various generations of people that came before them. While the start and end years of each generation are sometimes disputed, even a rough approximation can give us some context for what shaped their psyche.

The parents of Generation Y are late Boomers, Jonesers and some Generation X’ers. One thing that these generations have in common is the unbridled drive for personal success, measured in assets and consumption. Introspective and, dare we say, self-centered as these generations are, they attempted to reinvent themselves through faux spiritualism, often ridiculed in Generation-Y movies. (One has only to think of Dustin Hoffman and Barbra Streisand as the post-hippie parents in the Ben Stiller farce, Meet The Fockers.) Another socio-economic factor they share is the loss of trust in institutions of any kind, be it government, religion, marriage or, especially, corporations. Corruption, scandals, mass layoffs, fraud and heinous abuse have all but broken down the pillars of western society in which the people who were parents in the 1950s believed.

As opposed to their “Greatest Generation” parents, who were for the most part unfamiliar with and afraid of technology, pre-Gen-Y generations have seen modern marvels simplify and enhance their lives, and remember well what it was like to write papers on a typewriter or prevent vinyl records from getting scratched. Their appreciation of and awe for modern technology are therefore still great, which keeps them from fully capitalizing on all its benefits.

Finally, these generations grew up with the evolution of mass global media. Everyone from anywhere in the western world listened to the same music, watched the same TV shows and the same movies and played with the same toys. Media were limited, and content was controlled either by state-owned outlets or large international conglomerates.

So how would we expect Generation Y to behave, given these influences and rapid changes? Assuming that each generation vilifies the ones that came before it, we would expect Gen Y to reject self-centeredness, ergo be group-oriented. They should demand “meaning” over “compensation” in their occupation. We would expect them to roll their eyes at technical ineptness and to treat less-than-24/7 use of gadgetry as an indication of it. They should rebel against the limitation of media content set by their parents’ contemporaries, such as radio, television and record companies. And they should want to level the playing field with corporations so as to avoid the fate of their parents, whom they saw work long hours only to be laid off for no other reason than to ease shareholders’ nerves. Finally, as a result of being raised by “helicopter parents” who have given them constant feedback and praise for simply being alive, we would expect them to seek a similar relationship with their managers.

Indeed, all these traits are recognized as characteristics of Generation Y. Fortune Magazine sounded the alarm bells, noting that Generation Y will be hard to handle. As a retort, the Washington Post posited that Fortune readers, a.k.a. The Man, simply can’t deal with the fact that people refuse to slave away for them. Corporate bosses have been complaining about this phenomenon since the advent of – oh, sweet irony – the Baby Boomers.

Rather than bemoan the advent of “unmanageable Generation Y”, Generation Y: An Owner’s Manual will advise today’s managers on how to capitalize on the strengths of this new workforce and guide their perceived weaknesses into assets. We will examine their excessive multitasking, their connectedness, social fabric, work attitudes and socio-economic values, determine the pros and cons of these factors and evaluate some possible ways to use these traits to everyone’s benefit.

Quite possibly, we can recast Generation Y into Generation Why-Not.




About Carl Ververs: Carl has been a business transformer through technology since the start of his career two decades ago. Always at the vanguard of new thinking and creative application of systems, he built CRM systems, used SOA and applied Agile techniques well before they were named.

Carl's technical expertise lies mainly in high-performance computing for derivatives trading and business process management. His background spans a wide spectrum, including business application specialist, hierarchical storage system architect, customer management systems designer, trading operations manager, Agile project management coach, SOA practice lead, PMO/QA director and deputy CIO. Carl is an avid musician and composer, computer graphics artist and geopolitical pundit. He lives in Chicago with his wife and son.

Friday, July 25, 2008

Pettit - Are You Marking IT Projects to Market, or Meltdown?

by Ross Pettit - 25 July 2008

Capital markets firms are holding a lot of paper – primarily collateralized debt obligations – that has been subjected to massive writedowns in recent quarters. Among the reasons for the writedowns is the difficulty in valuing the assets that back the paper. These securities are not simple bonds issued against a single fixed income stream; they're issued against collections of assets, sometimes spanning classes: a bond may be backed by a pool of jumbo California residential mortgages as well as a pool of credit card debt from cardholders all over the United States. This makes it difficult to assess the risk of the bonds: all the moving parts mean there could be little – or substantial – exposure to different types of market dynamics, obfuscating the risk premium these securities should yield.

Assets like this have been valued – or marked – to a model. In turbulent markets, the models quickly show their limitations because they fail to account for events outside of the rigid, static universe codified by the model. Mass dumping of the securities (and concomitant evaporation of buyers), mass defaults of the underlying assets (e.g., mortgage defaults), and loss of market value for the underlying assets (e.g., declining residential home prices) are rarely baked into valuation models. When these events do occur, confidence erodes quickly and the market for these securities declines precipitously.

This forces those holding these assets to make one of two decisions. One is to sacrifice liquidity by holding the paper to maturity. By taking a long position and expecting that the investments will pay off (itself an act of finger-crossing) the holder is able to keep it on their books at a high value. Unfortunately, they can’t trade the position or use it as collateral for short-term credit because no counterparty will value the asset as highly as the model does. The other option is to sacrifice value by accepting a market price for the position: if the holder needs liquidity, they must mark the asset down to reflect what the market will pay for it.
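The holder's choice can be made concrete with a toy calculation. The model value and market bid below are invented for illustration; no actual security is being priced:

```python
# Hypothetical figures: a hard-to-value security whose internal model
# says one thing and whose market says another.
model_value = 100.0   # value the holder's model assigns (hold to maturity)
market_bid = 62.0     # best price a counterparty will actually pay today

# Holding keeps the book value but sacrifices liquidity; selling gains
# liquidity but books the difference as a writedown.
writedown = model_value - market_bid

print(f"Writedown if sold today: {writedown:.1f} ({writedown / model_value:.0%})")
```

The two numbers describe the same asset; the gap between them is the price of liquidity.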

Traditionally managed IT projects are not materially different. They are opaque investments that stifle organizational responsiveness.

Project plans consist of technical tasks (“create database tables” and “create QA plans”) collected into abstract phases of execution (“technical infrastructure” and “detailed design”). Because there is at least one degree of separation from task to business deliverable, traditionally managed IT projects are inherently opaque. They also assume that the business is taking a long position: the asset under construction isn’t designed to come together until “maturity” of the project.

Progress is measured as the percentage of abstract tasks that have been completed, and this is where the trouble begins. This is marking to model. Project plans are just models, and they typically don’t take meta-risks into account: e.g., the sudden exit of people, or a change in business priority. Worse, the work remaining isn’t necessarily a linear expenditure of effort: it consists of things that have not yet been performed, and it assumes a high degree of quality in previous deliverables. As a result, traditional IT tends to overstate the value of an investment. If we shut down a project that alleges to be 65% complete, we will not have 65% of its promised functionality; we will have far less. We may have performed a lot of technical work, but we can't run the business on technical tasks. This is, in effect, a market writedown on that project. By marking to model, we’re marking to myth.
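The distinction can be sketched in a few lines. The tasks, features, and counts below are hypothetical, not drawn from any real project; the point is only that the two measures of the same project diverge:

```python
# Mark to model: progress = fraction of technical tasks checked off.
# Task names are invented for illustration.
tasks = {
    "create database tables": True,
    "detailed design": True,
    "build service layer": True,
    "integrate payment gateway": False,
    "end-to-end order flow": False,
    "user acceptance tests": False,
}

# Mark to market: progress = fraction of business-usable functionality
# actually delivered and runnable today.
features_delivered = {
    "customer can browse catalog": True,
    "customer can place an order": False,
    "customer can pay": False,
    "back office can fulfill": False,
}

mark_to_model = sum(tasks.values()) / len(tasks)
mark_to_market = sum(features_delivered.values()) / len(features_delivered)

print(f"Marked to model:  {mark_to_model:.0%} complete")
print(f"Marked to market: {mark_to_market:.0%} of promised functionality")
```

Both numbers describe the same project; only the mark-to-market figure tells us what the business could actually run if the project stopped today.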

There is a better approach.

Agile practices require that we continuously deliver small units of functionality. To do this, we must express requirements as short statements of business need, create and automate execution of unit tests with the code delivered, build the project binary and release it to an environment continuously. By doing these things, we are completing units of business functionality, not just completing an inventory of technical tasks. Progress is measured by functionality delivered. This means that Agile projects mark to market: if we shut down an Agile project, we face no writedown.

Many argue against marking to market. A thin market doesn’t mean investments are without value: provided the underlying assets don’t default, securities issued against them can provide significant return if held to maturity. So it goes with IT traditionalists: IT projects require a lot of purely technical work that makes incremental delivery difficult. They must be “held to maturity” to achieve their payoff.

Common sense tells us otherwise. The absence of a market for financial instruments tells us that there are tremendous risks and uncertainties in the assets themselves. This means there is little appetite for them outside of highly adventurous capital, and we expect low asset values with steep risk premiums. The problem with long IT investments lies in the faith placed in their deterministic valuation when so much is simply unknown. IT projects – particularly those capable of driving outsized returns – are not exercises that can be solved by elaborate planning. They can only be solved by incrementally coming to grips with change and uncertainty.

Long positions restrict financial agility if we have to mark down the value of a financial position to get capital out of it. So it is with IT: taking long positions in IT projects binds people to projects for long periods of time. This leaves us capability-illiquid across our IT portfolio. We face a writedown should we need to move people from one project to another to prop up performance.

“Models are by definition an abstraction of reality.”1 By marking to model, we may very well end up with a rapid deterioration of a long position. Catastrophic project failures are never at a loss for models: by repeatedly grafting models on top of highly dynamic situations, project teams are wantonly marking to a meltdown. Confident assertions should never pass for facts of actual performance. By marking projects to market we know the value and understand the volatility of our IT investments.

1 Carr, Peter. "The Father of Financial Engineering." Bloomberg Markets Magazine. April 2008.




About Ross Pettit: Ross has over 15 years' experience as a developer, project manager, and program manager working on enterprise applications. A former COO, Managing Director, and CTO, he also brings extensive experience managing distributed development operations and global consulting companies. His industry background includes investment and retail banking, insurance, manufacturing, distribution, media, utilities, market research and government. He has most recently consulted to global financial services and media companies on transformation programs, with an emphasis on metrics and measurement. Ross is a frequent speaker and active blogger on topics of IT management, governance and innovation. He is also the editor of alphaITjournal.com.

Monday, July 21, 2008

Martin - Non-Traditional IT R&D

By Michael Martin - 21 July 2008

Significant R&D efforts grab the headlines, require substantial budgets, and serve a critical role in the new technology/product continuum. However, do such efforts represent only a fraction of the R&D equation? Do the smaller, more prevalent initiatives play an equally, perhaps greater, role?


"Innovation . . . is generally understood as the successful introduction of a new thing or method . . . Innovation is the embodiment, combination, or synthesis of knowledge in original, relevant, valued new products, processes, or services.”

-- Luecke and Katz (2003)

It is difficult to derive true numbers for comparison - small R&D efforts compared to the highly publicized mega-dollar initiatives - because the smaller efforts are sometimes hidden (the critical effort could not be funded or is simply not recognized as a worthy endeavor) or poorly defined (teams are engaged in what could be termed R&D efforts, but management and/or the team do not recognize it as such).

This multi-part series will explore several characteristics of innovation, including:

  1. Definitions of R&D from a traditional perspective.
  2. Re-defining or broadening the scope of innovation to include hidden and/or poorly defined efforts.
  3. Evaluating non-traditional R&D efforts from a methodological perspective. For instance, how does methodology (e.g., XP/Agile) factor into or drive innovation? Is a particular methodology better suited to innovation?
  4. Evaluating the desirability (i.e. goodwill) and cost/benefit of innovative efforts in a non-traditional context.
  5. Providing the framework to determine if non-traditional R&D already exists (perhaps in a poorly defined way) and if it is appropriate for a particular organization or project effort.
  6. Managing non-traditional efforts to innovate.


"Innovation, like many business functions, is a management process that requires specific tools, rules, and discipline."

-- Davila et al. (2006)

Faced with deteriorating economic conditions, technology managers face tough decisions: can we afford the luxury of funding projects that hold promise but have failed (at least in the short to mid-term) to provide expected returns? Given tremendous cost pressures, technology initiatives that have uncertain returns appear ripe for the chopping block. Technology managers, unable to articulate the unrealized value of early-stage projects during a budget storm, will likely be unprepared to prevent high-yield efforts from being sacrificed at the altar of cost.

This means that critical projects with the potential to deliver significant return - greater efficiency achieved through process improvement (e.g., re-architecture or refactoring of a significant code base), or an enhanced product/service offering - will never see the light of day. An effort passed over for cost reasons today could very well provide a critical budgetary boost in a time of need. The irony is that a decision to trim the budget via cuts in existing and/or pipelined initiatives can exacerbate the business situation that led to the need to cut budgets in the first place.

“The OECD Oslo Manual from 1995 suggests standard guidelines on measuring technological product and process innovation. Some people consider the Oslo Manual complementary to the Frascati Manual from 1963. The new Oslo manual from 2005 takes a wider perspective to innovation, and includes marketing and organizational innovation. Other ways of measuring innovation have traditionally been expenditure, for example, investment in R&D (Research and Development) as percentage of GNP (Gross National Product).”

-- Wikipedia

So how do we provide technology managers with the tools to more effectively assess the business impact of a diverse range of technology initiatives, thus highlighting – and protecting – crucial innovative efforts?

First and foremost, and core to this series, we must have tools that define and evaluate “innovation” in our business environment. Having such tools enables technology managers to proactively understand, manage and assess emerging initiatives. Of course, these tools are only useful if they are applied from the very beginning of an initiative, and used consistently throughout. By doing so, technology managers will not be forced into making abrupt, error-prone decisions at the first sign of economic crisis. This, in turn, minimizes the potential for preventable business loss.

R&D initiatives have the potential to provide meaningful returns while helping to shape IT and business strategy. Managers cannot afford and should not tolerate ill-defined initiatives that aimlessly drift about. Therefore, the purpose – and business impact – of various initiatives should be clearly articulated (e.g., is a system initiative experimental, system maintenance, or a user-driven expansion/modification of an existing system?), with benefits firmly established and ranked or prioritized against competing efforts.

However, in order to effectively evaluate across various types of IT initiatives, we must first clearly define and understand what innovation is. After gaining a sufficient degree of understanding, we (a) further define or broaden the sense of innovation to flush out those “hidden” efforts, (b) evaluate innovation from a methodological perspective, (c) evaluate the desirability (i.e. goodwill) and cost/benefit of innovative efforts in a non-traditional context, and (d) provide the framework to determine if non-traditional R&D exists. By doing this, we have the means by which we can effectively manage a non-traditional R&D effort.

“It’s worth looking more closely at each of the factors that enable innovation within an organisation: Agility, Community and Governance.”

-- Pettit (2007)

Our first step - and the subject of the next article in this series - is to achieve a greater focus and understanding of innovation. We will also begin to examine various techniques used to identify innovative efforts, along with a proposed framework that will aid technology managers in the pursuit and application of innovation.




About Michael Martin: Michael has 12 years' experience as a technical analyst, business analyst, iteration manager and project manager working on enterprise applications in a variety of industries including leasing, retail, transportation, insurance, government and print media. He also has experience in communication, security and management consulting in the government and construction sectors, and holds a Master's in Information Systems. He is currently a project/program manager working with Fortune 500 clients.

Tuesday, July 15, 2008

Kehoe - Futility Computing

By John Kehoe - 15 July 2008

For some time we’ve witnessed the push for utility computing: technologies such as server virtualization, storage virtualization, and grids that shift loads. Then there’s data source virtualization: natural language queries that retrieve a steaming heap of data from a mix of sources without being transparent about how it all got there. Sounds like the tomatoes the FDA can’t track down.

It’s best described as “Futility Computing,” an idea Frank Gens of IDC came up with in 2003.

Here's why utility computing is problematic.

First, the technologies have had a long maturity curve. Remember when a certain RDBMS vendor (who shall remain anonymous because I might need a job someday) promised the first grid capable of dynamically shifting load? We've been in pursuit of heterogeneous storage virtualization for a long, long time. Has there ever been a cluster that wasn’t a cluster-[expletive]?

Second, utility computing “solutions” are money spent on the wrong problem. The argument can be made that there are savings to be had by creating a utility structure. We save rack space, fully utilize storage, cut the electric bill and reduce HVAC requirements. We even get to do a nice little PR piece about how green we are and how we're saving the polar bears because we care. But what is the real cost? Do we have the right hardware for scalability? Can our business solutions exploit virtualization, or will performance degrade under the utility approach? What is the risk of vendor lock-in? Does the utility solution support the mix of technologies we already use, or do we need separate tools? Is virtualization robust? Above all else, how much obscurity do we introduce with the utility model? Not only do we risk distorting our costs, but with all the jet fuel we'll need to burn flying consultants back and forth to keep the virtualization lights lit, we may not be doing the polar bears any great favors after all.

Most shops have a Rube Goldberg feel to them: applications are often pieced together and interconnected to the point where they make as much sense as an Escher drawing. IT doesn’t know the first place to start, let alone know what all the pieces are (which is why SOA and auto-discovery are pushed, but that is another diatribe). Any virtualization effort requires a complete understanding of the application landscape. Without it, a utility foundation can’t be established.

One byproduct of virtualization initiatives is the further stratification or isolation of expertise. The storage team is paid to maximize storage efficiency and satisfy a response time service level agreement (SLA). The Database Administrator (DBA) has to satisfy the response SLA for SQL. The middleware team (e.g., Java or .NET) has to optimize response, apply business logic and pull data from all sorts of remote databases, applications and web services calls. The web server and network teams are focused on client response time and network performance. Everybody has a tool. Everyone has accurate data. Nobody sees a problem.

Meanwhile, Rome burns.

Unfortunately, nobody is talking to one another. The root of the problem is that we have broken our teams into silos. That leads us to overly clever solutions: servers that automatically shift load while sorting this nastiness out, storage that shifts around in the background, or even virtual machines moving about. We never stop to ask: does any of this address our real problem, or is it just addressing the symptoms?

This brings to mind some metaphors. The first is an episode of the television series MacGyver (Episode 3 ‘Thief of Budapest’) where MacGyver has to rescue a diplomat. During his escape, the villains (the Elbonians) shoot and damage the engine of his getaway car. Mac goes under the hood to fix the engine with the car traveling at highway speed. Outrageous as it may sound, this is not too far from the day-to-day reality of IT. Of course, IT reality is much worse than this: the car is going downhill at 75 MPH on an unlit, twisty road, the Elbonians are still shooting away, and the car is on fire.

The second metaphor that comes to mind is an anti-drug television campaign that ran in the US during the 1980s. It opened with a voiceover, "this is your brain," accompanied by the visual of an egg. This was followed by another voiceover, "this is your brain on drugs," with a visual of the egg in a scalding-hot frying pan. The same formula is applicable to an application on utility computing: this is your application; this is your application on a stack of virtualization.

As we fiddle, we’re pinching pennies on the next application that our business partners believe will give them competitive advantage. Because we’ve discounted application performance, and compounded that by having no means of finding where business transactions die, we’ve put functional requirements at significant risk. In fact, we’ve misplaced our priorities: we spend prodigiously on utility enablement, silo-ing and obscuring IT while simultaneously ignoring end-to-end performance. We do this because we take performance for granted: vmstat and vendor console tools are all we need, right?

Very few virtualization/utility models succeed. Those that do have common characteristics. There is a clean application landscape. People have a shared understanding of the applications. They have dedicated performance analysis teams staffed with highly capable people. They have very low turnover. Finally, they have a methodology in place to cut through the silos and pinpoint the cause of a performance problem. From personal experience, about 1.5% of all IT shops can say they have all of these things in place. That means 98.5% are underperforming in this model.

Without the right capability or environment, a utility approach is going to cause more harm than good. It is possible to get away with some virtualization deployments: one-off VMs are easy, and some degree of server consolidation is possible. But look out for the grand-theft-datacenter utility solution. It’s pretty violent.




About John Kehoe: John is a performance technologist plying his dark craft since the early nineties. John has a penchant for parenthetical editorializing, puns and mixed metaphors (sorry).

Sunday, July 6, 2008

Rickmeier/Robarts - You're Going To Need a Bigger Boat

By Mark Rickmeier and Jane Robarts - 6 July 2008

In the classic movie Jaws, an unusual and ill-prepared trio sets out to capture and kill a shark that is preying on the local (and apparently tasty) New England tourists. Tempted by fame, reward, and their desire to bring safety to the waters, these sailors head out into the deep, not really appreciating their situation. They draw up plans to hunt their prey assuming that this monster is like all other sharks they’ve faced before, greatly underestimating their challenge. Trusting that what has worked in the past will work again, they decide to use typical tools, including Hooper’s trusty shark cage, which gets destroyed almost as soon as it’s lowered into the water. They travel in their tiny boat, only to find that it too will quickly become shark bait. Essentially they go after a new kind of challenge with the same old approach, and they’re lucky to lose only a third of the crew.

Companies today are similarly tempted to venture out into the unknown and take on the challenges of distributed development. It’s a beast they haven’t challenged before, but the temptation to try is too great to resist, because distributed development appears to offer two clear advantages. First, companies want to realize cost savings. Having been sold on the concept of lower marginal rates per person, they are convinced there is money to be saved. Second, they are interested in improving their turnaround time by scheduling development activities “around the clock.” IT executives are told that this can pay large dividends: critical defects found in production during the business day can be resolved overnight by an offshore team and deployed prior to the start of the next business day.

This interest can turn into skepticism when projects become larger and more complex. Given these uncertainties, IT executives often turn to Agile or Scrum methodologies to reduce their risk when delivering these kinds of complex systems. These methodologies allow teams to adapt to changing requirements, quickly respond to stakeholder feedback, and deliver working software faster, often in shorter incremental releases. However, they also require a lot of informal and high-bandwidth communication between team members that is not always possible in a distributed setting. The Agile Manifesto itself calls out several key values, many of which are directly challenged by a distributed approach:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Several development best practices face constraints within the distributed model. Developer pairing is considerably less effective without face-to-face interaction between team members. The limited hours of overlap between distributed locations greatly reduce collaboration time, lengthen feedback loops, and reduce the team’s ability to collectively resolve issues. Communication barriers can foster trust issues between the teams, sometimes leading to “us vs. them” mentalities and a breakdown in the effectiveness of the limited communication that is possible. Lastly, self-governing teams that rely on team meetings like retrospectives and daily stand-ups tend to receive input from only one of the distributed groups – either ignoring or not hearing the other team’s input at all.

All too often, project teams will jump into the distributed waters having been sold on the advantages of distribution without fully appreciating the challenges that await them. (Experienced practitioners can almost hear the John Williams soundtrack slowly building in the background: duh-duh. Duh-duh.) If a distributed project is approached with the same project management style, the same development practices, and the typical “roles and responsibilities” of collocated agile teams, there will be blood in the water within the first month. To drive distributed projects to success, and to realize the benefits of the distributed development model, the project team must take a modified approach.

As companies approach this challenge, an important first step is to realize that there are hidden costs: distributed development is not all savings. These projects require more infrastructure to support the offshore effort, not only in physical hardware and software, but in additional management and communication infrastructure as well. Distributed project teams will always be larger than their collocated counterparts, as duplicated roles such as analysts and project managers are required to keep the two teams operating and communicating effectively. Furthermore, team members must put more time and effort into their communication to make up for the distance created in the distributed setting. These costs are often greatly underestimated, inflating the apparent cost savings of distribution.

It is also important to realize that although distributed delivery can offer certain advantages, not all projects are equally suited for distribution: some carry more risks than others. A typical project risk, such as complex integration with third-party systems, will be greatly magnified on distributed projects. In addition, there are new risks that must be dealt with, such as the inability to easily transfer knowledge. IT executives must decide if the benefits outweigh the risks before deciding to move forward.

Finally, when the team has weighed the risks and approximated the cost, they’ll still need to adjust their plans and strategy before they can have any expectation of success. To overcome communications barriers, technical challenges, and cultural hurdles, teams often have to increase contingency in the overall release plan to accommodate the associated increase in risks with distribution.

Distributed development can be a very effective and rewarding way to deliver software. It has been proven to work not only on small projects, but also on large, complex, enterprise applications. But it requires the right kind of preparation, a realistic approach to the risks and challenges the teams will face, and an understanding that what works for local teams will not always work for distributed ones.

Unless, of course, you’re willing to make a third of your team shark bait.




About Mark Rickmeier: Mark has 8 years' experience as a quality assurance tester, business analyst, iteration manager, Scrum master and project manager working on large scale enterprise applications. He has extensive distributed development experience working with project teams in Australia, India, China, Canada, the UK, and the US. His industry background includes equipment leasing, retail, insurance, healthcare and retail banking. He is currently a project manager specializing in Agile transformation and project governance with a focus on metrics and measurement.


About Jane Robarts: Jane has over 10 years of experience in custom development of enterprise applications as a developer, analyst, project manager, and coach. She has consulted on and managed projects in India, China, the UK, the US, Australia and Canada. Jane has spent most of the last several years coaching distributed teams and working directly on distributed Agile delivery projects both onshore and offshore. She has also developed and delivered many sessions on agile delivery to a variety of audiences.

Wednesday, July 2, 2008

Cross - Technical Balance Sheet, Part I

By Brad Cross - 2 July 2008

Technical Debt is a metaphor conceived by Ward Cunningham to describe the costs and tradeoffs associated with quick and dirty solutions.

Netscape offers a compelling example of excessive technical debt. The commercial battle between Netscape and Microsoft's Internet Explorer (notably the anti-trust suit) is well known. But what about the technical side of the story? Let's look at market share over time and how it corresponds to technical events.

When Internet Explorer 3, the first widely used version, came out in 1996, IE's market share was 20% compared with Netscape's 80%. By the time IE 5 launched in 1999, it had 75% of the market. Here is what Wikipedia has to say about what happened during the three-year interlude:

"The aging Communicator 4.x code could not keep up with Internet Explorer 5.0. Typical web pages had become graphics-heavy ... which Communicator struggled to render. The Netscape browser, once regarded as a reasonably solid product, came to be seen as crash-prone and buggy. In addition, the browser's somewhat dated-looking interface didn't have the modern appearance of Internet Explorer."

"In March 1998, Netscape released most of the code base for Communicator under an open source license. The product named Netscape 5, which was intended to be the result, was never released, as managers decided that the poor quality of Netscape's code made a complete rewrite their only viable option. ... [M]any users continued to migrate to Internet Explorer, and the Netscape browser itself has largely been abandoned."

Netscape 5 was never really released commercially, and by the time they released version 6.0, it had been about 3 years since the 4.0 release. The chart tells the story of what happened to their market share during those three years. What was happening technically? Why did it take them so long to release? They rewrote from scratch.

A team not committed to quality is a team with a depreciating code base: technical liabilities steadily erode equity, with a high cost of carry. As the code base grows without evolutionary design, refactoring and automated testing, it gets crufty, buggy and unreliable. New development slows dramatically because the code is difficult to change and extend. Releases become inconsistent and infrequent. Eventually, like Netscape, a team can go into technical default and face technical bankruptcy. At this point, most companies will enter into a rewrite. The rewrite, typically carried out by the same team that spearheaded the first catastrophe, is at high risk of failing as well. What has the team learned that will materially change the outcome?

The inverse can also be true: you can have a technical asset. Clean, well designed, modular code with appropriate test coverage can be easy to maintain and extend, and it can give you a platform to rapidly add new features even as your code matures and ages. You can easily swap out entire components of the system. This kind of code base can be a competitive advantage.

A technical asset can be a business asset, but not always. The code has to actually do something useful. Pretty code is not necessarily valuable code, and ugly code is not necessarily without value. Business value is orthogonal to the code.

This notion of usefulness might seem like a gray area, but look at the root word: use. Does anybody use the code to solve their problem? Do you have unused code in your code base? Delete it.

Technical debt, like financial debt, is not always a bad thing. Leverage can be used in moderation to increase returns. Sometimes you can go a bit faster and take on some technical debt. However, you must be mindful of the debt you take on and be prepared to pay it back.

It is questionable whether a team is faster or slower with technical debt, and where the balance lies. In the short term it may appear advantageous to take on technical debt to meet a deadline, yet it is obvious that it is necessary to minimize the debt burden in order to sustain a rapid pace over time. You can pay gradually during development, or you can make a lump sum payment by temporarily stopping the development of new features.
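The gradual-versus-lump-sum tradeoff can be sketched with a toy model. The model and its numbers (a 5% velocity tax per unit of outstanding debt, one unit of new debt per iteration) are illustrative assumptions, not measurements of any real team:

```python
# Toy model: each unit of unpaid technical debt taxes the team's
# velocity by a fixed "interest rate" every iteration.

def simulate(iterations, base_velocity=10.0, debt_per_iter=1.0,
             paydown_per_iter=0.0, interest=0.05):
    """Return total features delivered when each unit of outstanding
    debt drags velocity down by `interest` (velocity floored at zero)."""
    debt, delivered = 0.0, 0.0
    for _ in range(iterations):
        velocity = max(base_velocity * (1 - interest * debt), 0.0)
        delivered += velocity
        debt = max(debt + debt_per_iter - paydown_per_iter, 0.0)
    return delivered

fast_now = simulate(20)                        # never pay it back
steady   = simulate(20, paydown_per_iter=1.0)  # pay as you go
```

Under these assumptions, a team that never pays down its debt delivers roughly half as much over twenty iterations as one that pays as it goes. The exact numbers are artifacts of the model, but the compounding shape is the point.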

There are different grades of debt. High-grade debt helps achieve a major deadline and can be paid off immediately, similar to low-risk, short-term sovereign notes that pay low interest rates. If there is uncertainty in architecture or design, exploration is of greater value than inaction.

Low-grade debt is risky and costly to service: the technical equivalent of "junk bonds." Still, even low-grade debt might have its place. As a guiding principle: don't finance your core business on junk, and be wary of the fact that many so-called short-term solutions become long-term problems. Consciously taking on technical debt that will outlast a release cycle is likely to backfire. Avoid the myth of taking on debt that you will pay down later. Later never comes.

Are you rushing to meet a deadline, taking on technical debt at high interest rates? Will you be able to service that debt when your creditors come to call? Many teams aren't. Be mindful of your debt load, and know how and when you're going to service it.




About Brad Cross: Brad is a programmer.

Spillner - Big Projects, Small Increments

by Kent Spillner - 2 July 2008

Businesses change too rapidly for monolithic IT deliveries to succeed, but large problems requiring large solutions still exist. Breaking solutions down into a series of small, frequent releases is the only way projects can adapt to succeed.


Increment or Die

There isn't time to deliver any other way. Delivery risks compound over time: business changes, technology changes, people come and go. Every day that passes between deliveries doesn't bring the team closer to the next release; it increases the odds of failure!

There isn't time to be cautious, either! How many projects survive the evolution of the business? No matter how hard team members work or how detailed the release plan, time works against us. The only survival option is to deliver early, deliver often.

Even when a project seems fixed and well-specified, teams can't expect to get things right. Consider NASA's recent Phoenix Mars Lander spacecraft. NASA is delivering new releases to the spacecraft on Mars daily! When the craft scoops up some clumpy dirt that's too big to filter through a screen, they beam up new code to wiggle the lander and shake the dirt loose. Frequent, regular delivery allows NASA to adapt without wasting time or missing opportunities.

Characteristics of Good Increments

Successful incremental delivery is the same on every project. Good increments are potentially releasable software delivered into a production environment (or replica) as small changes, at high frequency, and on a regular schedule. This definition is atomic. Failure to do any of these things is not just incrementing badly, it's putting the project at risk.

Increments must be potentially releasable. Technical debt accumulates over time and hobbles productivity. Teams can't afford to trade quality for convenience. All those quick fixes crammed into the codebase today will come back and take a bite out of productivity tomorrow. It's important to set the quality bar as high as possible as early as possible, and keep the team focused on delivering professional-grade software. Delivering potentially releasable increments also enables ultimate business agility: shipping at a moment's notice in response to changing business conditions!

Increments must be delivered into a production environment (or exact replica of one). Until changes are running in production, teams can't know what they've built. The release process also needs to be tested as much as the released software. Do whatever's necessary to get your team delivering into a production environment, and do it now!

Increments must be small. Really small. As small as one change at a time. This means that incremental delivery and continuous integration go hand-in-hand. Continuous integration is the process of automatically running the project's full end-to-end build cycle -- including all automated unit, integration, and functional tests -- while the team is working. Continuous integration gives the team frequent feedback on the health of the project as a whole, and reduces maintenance costs by finding bugs immediately when they are introduced. Running the full end-to-end quality assurance process for every change made to the product is a critical component of success. There's no such thing as too small!
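The loop at the heart of continuous integration can be sketched in a few lines. This is a toy illustration, not a real CI server; the git command and the placeholder build steps are assumptions for the example:

```python
# Toy continuous-integration loop: on each new commit, run the full
# build-and-test pipeline and report the project's health immediately.

import subprocess

def head_revision():
    """Current commit id; assumes a git working copy."""
    out = subprocess.run(["git", "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def run_pipeline(steps):
    """Run each build step in order; stop at the first failure."""
    for step in steps:
        if subprocess.run(step).returncode != 0:
            return False  # broken build: feedback arrives immediately
    return True

# Example pipeline: compile, then unit, integration and functional tests.
# (The commands are placeholders for whatever the project's build uses.)
PIPELINE = [["make", "build"], ["make", "unit"],
            ["make", "integration"], ["make", "functional"]]
```

A real CI server adds scheduling, notification and history on top, but the essential contract is just this: every change triggers the full end-to-end quality process, and a failure stops the line.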

Increments must be delivered with high frequency, daily at a minimum, more frequently if possible. There's no such thing as too short! Every feature that isn’t delivered to QA or production just piles onto the next. It isn’t helpful to have developers move on to new tasks before getting feedback on the previous one, or to load more work onto quality analysts than they can handle.

Increments must be delivered regularly. It's crucial to establish and sustain a rhythm for a project. That helps people in every role get into their flow, and makes it easier to spot when things go bad. Slipping schedules can be a great early warning system, but only after establishing a history of making very frequent deliveries.

Better than monoliths

Monolithic delivery is just too risky. The time between monolithic releases breeds opportunity for the competition, and creates space for business interests to diverge from the goals to which a project team is working.

Incremental delivery is an antidote to these problems because it reduces the feedback cycle to the point that teams are able to adapt instantly to changing business realities.

At RailsConf 2008, a speaker announced an open source cloud computing platform called Vertebra, which effectively killed a commercial product being developed by a company in attendance. They were unable to respond with their own product announcement since it wasn't the right time in their release cycle.

Or consider another example: a project that was written off after it was development-complete. During development, the infrastructure group migrated the production environment to a different technology platform. The development team wasn't aware of the change until the final QA phase: that is, after thinking they were "done," but before going into production. Had the issue been detected earlier, migrating platforms would have been affordable, instead of writing the whole thing off as an embarrassing loss.

Anti-patterns

Unfortunately, incremental delivery is easy to get wrong. Avoid the following anti-patterns:

  • Partial delivery: not delivering into production. Unless delivering into production, "done" is more an interpretation, less a fact. If every increment can't be delivered into production, they should be delivered into a cloned replica of production. If increments aren't even delivered into a cloned replica of production, then the team isn't delivering!

  • Infrequent delivery: more than one day between deliveries. There's no technical reason teams can't deliver on every commit, but at the very least they should be delivering daily. Otherwise, the team is asserting their work is done based on assumption. Finish the job!

  • Chunked delivery: too many changes since the last release. Chunking development and QA work is a false economy of scale. The cost of rework is amplified by latency in feedback, which completely destroys any economy of scale from chunking. Deltas need to be small enough that QA can test the entire delta before the next delivery. Feedback cycles need to be short enough that developers don't move on to the next task before their previous one is truly finished.

Embrace Agility

There is zero return to the business until software is delivered into production. Every decision on a project is subordinate to this fact, and no amount of hard work, experience or planning can change it. Incremental delivery makes teams more successful by enabling them to deliver a higher-quality product sooner, roll out updates more frequently, and respond immediately to changing business conditions. Stop hitting yourself: embrace agility, deliver incrementally.




About Kent Spillner: Kent loves writing code and delivering working software. He hates cubicles, meetings and Monday mornings.

Ververs - Deutilization

by Carl Ververs - 2 July 2008

As long as IT has been around, IT professionals have complained about being unappreciated and misunderstood.

Well now, are you surprised?

How can business types think of IT any other way than a bunch of maladjusted adolescents spouting acronyms and building Rube Goldberg systems?

In just about any IT shop, computer systems are ramshackle contraptions, over-complicated compared to the simple problems they were designed to solve. Additionally, the staff on the ground often thinks their systems are the summit of innovation and that there is no better solution ever developed. Only a few individuals know parts of the system in-depth and they guard that knowledge with their lives. One can only obtain a full picture of the entire production system by piecing together what the Cabal of Elders is willing to share.

Subsequently, IT spends most of its time chasing its tail rather than innovating and building revenue-generating solutions.

The mantra of IT is, "But we have legacy systems!" Even improving them gradually is considered costly and risky. The cost and risk of rewriting systems, possibly replacing bespoke components with better commercial ones, is automatically assumed to be far greater than the total economic impact of keeping the current systems.

But is it?

The best way to find out is - shocker - to run the numbers. What do system outages cost? What percentage is related to attempts at improvement? How many people are "twirling plates" to keep the system alive? How much additional revenue or cost savings have you generated by actually rolling out system or process changes? And, the big one: what’s your systems’ “bus factor”, meaning how much of the system’s operation is in only one individual’s mind?

If you do this exercise for all of your critical business systems regularly, you'll get a clear picture of where your IT staff is spinning its wheels getting nowhere. It need not be more than a few hours of work if your staff is already collecting the right metrics, for example outage attribution and impact. If you then do some basic analysis of what components can be bought and which have to be rewritten, you get an idea of the rewrite cost.
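The exercise fits in a few lines of arithmetic. Everything in this sketch (outage hours, rates, head counts, the utilisation figure) is a hypothetical placeholder to be replaced with your own metrics:

```python
# Back-of-the-envelope "run the numbers" exercise: annual cost of
# keeping a system alive versus the estimated cost of rebuilding
# only the functionality that is actually used.

outage_hours_per_year = 120
revenue_lost_per_outage_hour = 5_000
plate_twirlers = 4                  # staff who only keep the system alive
fully_loaded_cost_per_head = 150_000

keep_alive_cost = (outage_hours_per_year * revenue_lost_per_outage_hour
                   + plate_twirlers * fully_loaded_cost_per_head)

# Rebuild estimate: reimplement only the ~50% of functionality in use.
original_build_cost = 2_000_000
functionality_in_use = 0.5
rewrite_cost = original_build_cost * functionality_in_use

years_to_break_even = rewrite_cost / keep_alive_cost
print(keep_alive_cost, rewrite_cost, round(years_to_break_even, 1))
```

With these placeholder figures the rewrite pays for itself in under a year; your own numbers may say the opposite, which is exactly why running them matters.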

Anticipate the obvious IT response: "But, dude, it took us years to build this system! How can you say we can rebuild it quickly?!?"

Keep in mind that the "current systems" were not implemented with their full functionality in one exercise; they grew organically, through bouts of discovery and spurts of modification. Also remember that the actual functionality need not be rediscovered: it's staring you in the face. In many business systems, a mere 50% of implemented functionality is actually used. (See this article on Standard Life’s internal audit.) So rather than assuming that a rebuild will take as long as the original development, take a good hard look at the useful functionality and estimate how much effort it will take to reimplement.

Many CIOs make the mistake of not looking far enough ahead. Engulfed as they are with keeping the current systems stable, they cannot muster the courage to embark on a risky and controversial system overhaul. Consequently, they are caught off-guard when a critical individual leaves or the system starts to crumble under its own weight. By then it is too late and the options are few. When this happens, IT has become the victim of its own creations (the “Frankenstein effect”) and no longer has the option of running its own agenda. It tends to become a drag on the business rather than an asset.

Look at all the business-critical systems and add up how much time is spent fixing problems and keeping them going. Then figure how much production time is lost because of attempted improvements and you will get a good idea of their cost of ownership. To round out your assessment, take a stab at listing the business functions that are actually used and sit with your development managers to estimate how much time it would take to reimplement that.

So rather than let The System Beast keep you hostage, tame it with your most menacing weapon, your calculator, by doing a bit of systems detective work. You'll be pleasantly surprised by what you find.




About Carl Ververs: Carl has been a business transformer through technology since the start of his career two decades ago. Always at the vanguard of new thinking and creative application of systems, he built CRM systems, used SOA and applied Agile techniques well before they were named.

Carl's technical expertise lies mainly in high-performance computing for derivatives trading and business process management. His background spans a wide spectrum, including business application specialist, hierarchical storage system architect, customer management systems designer, trading operations manager, Agile project Management coach, SOA practice lead, PMO/QA director and deputy CIO. Carl is an avid musician and composer, computer graphics artist and geopolitical pundit. He lives in Chicago with his wife and son.

From The Editor - Welcome to alphaITjournal.com

By Ross Pettit - 2 July 2008

Welcome to alphaITjournal.com, an online magazine focused on the execution, management and governance of IT investments that can yield outsized (or "alpha") returns.

Generally speaking, IT is managed to contain costs, not to maximise impact. The lion’s share of those costs – maintenance, accounting system implementation, support, licenses, hardware and so forth – are purchased like utilities, no differently than water or electricity. However, the high end of IT doesn't lend itself to this type of procurement. It's characterised by continuous problem solving, not rote execution. This distinction is important to make: while the high end might be a fraction of the spend, it packs the biggest punch. Business impact from IT happens - or doesn't - as a result of how these investments are delivered.

This recognition defines the exclusive focus of this magazine: the principles and practices of execution, management and governance that define IT as a high-value-added business capability, not a low-value-added utility.

Each week, we'll publish one or two new articles of 800 to 1,000 words written by practitioners, for practitioners. No press releases, commercial whitepapers, or academic research. The small article size – about two printed pages – is intended to make alphaITjournal.com concise, direct and to the point. It also makes the content highly readable whether on a traditional PC or a smartphone, in a browser or an RSS newsreader.

Global IT consultancy ThoughtWorks is generously sponsoring our content management platform. In recognition, there will be a clear link to ThoughtWorks content collections. Beyond this, however, there will not be advertising lining every page or embedded with each article. This will allow us to focus exclusively on ideas, not on the commercial aspects of running a for-profit magazine.

I welcome you as a reader. I also hope you'll think about being a contributor of a one-off article, a short series, or a regular column. If you have ideas that you think will resonate with your peers in the top tier, write up an abstract. I'll give it an outlet, and an audience.

I hope you'll enjoy reading alphaITjournal.com as much as we do writing, editing and publishing it.

Best Regards,


Ross Pettit
Editor
alphaITjournal.com