Cutting the wrong corners can have a devastating effect, as demonstrated earlier this month by what happened at TSB. Brand reputation can be destroyed overnight by the power of the media.
One BBC News headline dubbed the recent problems at TSB “The TSB Computer Fiasco”, whilst the Guardian reported that in some quarters “TSB stands for Totally Shambolic Bank”, adding in a separate article that “Several readers have told us that they intend to close their accounts, suggesting TSB will suffer long-term damage from the IT fiasco”. In an article headlined “TSB under growing pressure as MPs and tech experts accuse bank of rushing botched IT upgrade”, the Telegraph reported that “MPs accused the bank of cutting corners on the IT upgrade”.
But it’s not actually the media that damaged TSB’s reputation, is it? TSB seem to have done that for themselves; the media just made sure we all got to hear about it.
So what went wrong, and what can we learn from it?
What went wrong?
The bottom line is that TSB failed its customers because its IT systems failed after a major upgrade. A lot has been written about the failure of project planning and the lack of robust testing. I am sure that is all true, but I feel that none of us can just sit back and judge: there should be no complacency, whether within the world of financial services or among technology suppliers.
The devil is always in the detail
A tweet from a blogger caught my eye as the drama unfolded. He’d been using his browser to inspect the source code of the TSB online banking login page. He found a variety of issues and published screenshots of them in the thread. He attracted quite a few replies, many from others contributing further issues they’d found once they started digging.
This tweet started me thinking more generically. It posed a couple of questions in my mind:
- First, how many websites would emerge well from such a look-under-the-bonnet?
- Second, how well do people really understand the separation between a website’s presentation layer and the systems that sit behind it?
A quick answer to the first question affirms my comment above that none of us can afford to be complacent: very few websites come away squeaky-clean from such an under-the-bonnet inspection.
Even websites that continue to operate acceptably can exhibit internal warnings and errors: issues such as stylesheets or formatting rules failing to load as originally intended.
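This kind of silent degradation is easy to catch with a simple audit: compare the assets a page declares against the assets actually deployed. The sketch below does this for stylesheets using Python’s standard-library HTML parser; the page markup and asset list are hypothetical, purely for illustration.

```python
from html.parser import HTMLParser

class StylesheetAudit(HTMLParser):
    """Collect the stylesheet URLs a page declares, so they can be
    checked against what the server can actually deliver."""
    def __init__(self):
        super().__init__()
        self.stylesheets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "stylesheet":
            self.stylesheets.append(attrs.get("href"))

# Hypothetical login-page markup and deployed-asset list, for illustration only.
page = """
<html><head>
  <link rel="stylesheet" href="/assets/main.css">
  <link rel="stylesheet" href="/assets/legacy-theme.css">
</head><body>Login</body></html>
"""
deployed = {"/assets/main.css"}  # what the server actually serves

audit = StylesheetAudit()
audit.feed(page)
missing = [href for href in audit.stylesheets if href not in deployed]
print(missing)  # the page references a stylesheet that was never deployed
```

A page like this still renders and “works”, but every visit quietly logs a failed request: exactly the sort of internal warning that goes unnoticed until it matters.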
Technology is amazing, but it’s also frustrating: it keeps moving and evolving. It’s the nature of the beast. We need to employ the best people and processes to ensure that those warnings and minor issues don’t evolve into show-stoppers that a customer experiences. We need to test, test and test again, and never accept the mediocre.
The second question is much more interesting. It seems that most people, as users, tend not to distinguish between a front-end website running on their device and the engine running on a server somewhere.
This separation means, for example, that many organisations apply a modern presentation layer over their existing systems, so that they can offer digital solutions on top of what is now dubbed legacy technology. Typically, this legacy technology isn’t web-enabled; it may look bland and boring, but it very rarely failed.
This is sometimes thought of as something of a “sticking-plaster” approach, but there is nothing inherently wrong with it as a business strategy. Indeed, it makes good commercial sense. Modernising a complex legacy solution can be a huge and risky project, so a front-end-first approach reduces both cost and risk. This strategy focuses on modernising a single portion of the application at a time. It can deliver immediate value to the end-users without the risk of larger structural changes, but only if it is done correctly.
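In practice, the front-end-first approach often amounts to a thin adapter that translates the legacy system’s output into something a web front end can consume. The sketch below illustrates the idea with a hypothetical fixed-width account record; all names and the record layout are invented for the example, not taken from any real bank’s architecture.

```python
from dataclasses import dataclass

def legacy_fetch_account(account_id: str) -> str:
    """Stand-in for a legacy core call: returns a fixed-width record
    (12-char account id, 20-char holder name, 12-char balance in pence).
    Layout is hypothetical, for illustration only."""
    return f"{account_id:<12}{'A N OTHER':<20}{105000:>12}"

@dataclass
class Account:
    account_id: str
    holder: str
    balance_pence: int

def fetch_account(account_id: str) -> Account:
    """Presentation-layer adapter: translate the legacy record into a
    structure a modern web front end can serialise, e.g. as JSON."""
    record = legacy_fetch_account(account_id)
    return Account(
        account_id=record[0:12].strip(),
        holder=record[12:32].strip(),
        balance_pence=int(record[32:44]),
    )

acct = fetch_account("00123456")
print(acct.holder, acct.balance_pence)  # A N OTHER 105000
```

The legacy core is untouched; only the translation layer is new. The risk, as the next paragraphs argue, is in treating that thin layer as too trivial to deserve the rigour the core was built with.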
OK, this may seem like absolute common sense. Who would implement a project and not do it correctly? Well… there are probably hundreds of examples that managed to avoid the headlines, but we all know of instances over the years where a sticking-plaster enhancement has been applied without any of the rigour and discipline with which the legacy part of the system was originally developed and deployed.
Modern technology providers boast about speed to market, agile development and ease of implementation… we do so ourselves. Those are all laudable attributes in our world of wanting everything yesterday, where customer requirements are constantly evolving and competition pushes the boundaries further. But, in all the excitement, it seems easy for some to overlook one of the reasons why an existing legacy system worked so well, without error, for so long.
So what can we learn?
The lesson I take away from all this is that we still need the old attitudes to underpin our brave new world of “fast, easy and shiny”.
This does not mean that we need to go back to the old methods and processes themselves… there are new, often better, methods and processes designed for new technologies.
Instead, it means that we need to keep hold of the old attitudes that lead to the discipline of setting up and enforcing appropriate processes for planning, management, implementation, and testing.
We’re all familiar with the 80-20 rule in business: the idea that 80% of the value comes from 20% of the effort, and so the last 20% of the effort just isn’t required for a successful conclusion to a project and a satisfied customer. The key is in deciding which 20% is surplus to requirements.
Most of us tend to view “cutting corners” as a pejorative term. I’m sure that the “cutting corners on the IT upgrade” mentioned earlier wasn’t meant as any sort of praise or commendation.
We’re all looking to cut costs, to increase efficiency, to make savings. We’re all looking for that 20% to leave out. But we all need to remember this: if you cut the wrong corners, sooner or later you will be found out.