Robert Kor speaks at the Hong Kong International Computer Conference 2005
24 November 2005, Hong Kong Convention and Exhibition Centre, Wanchai, Hong Kong.
The Time is Now: Shifting Banking IT Mindset From Reactive to Proactive
Full text of Robert Kor's speech delivered on 24 November 2005 at the Hong Kong International Computer Conference 2005.
Act one: Introduce yourself
Good afternoon, ladies and gentlemen. My name is Robert Kor. I am the Managing Director of TechnoSolve Limited. I would love to talk to you about TechnoSolve and our products, but you are not here to hear that. I am here to share our experiences, specifically mine, as a 30-year veteran of the banking world, particularly in its IT aspects.
Let me start by saying this: I am not an IT expert. What I do have is twenty-odd years of experience running multiple operations departments in banking, the IT department among them. What I am giving you today is the culmination of my years of personal experience, and I will relate to you what I have learned.
When I started in the banking industry in the seventies, we had gigantic computers with very little memory and storage. Processing power was so expensive that our programmers and developers had to use it very carefully. Four-digit years were cut down to two. I'm sure you all know it – some of you from experience, and some of you because you've read it in "computer history". Anything that could be simplified was shortened and simplified. I was with a large local banking group then. We had a massive mainframe computer that filled up a whole room, with only 4 megabytes of RAM running all of the bank's applications.
Computers were a relatively new tool at that time. They were mystical to us in operations. IT, or EDP as it was called then, told us in operations what they could or could not do. We at operations were basically at the mercy of what the programmers told us. On the other hand, we dictated what the EDP department should be building. We gave them the detailed specs and requirements. EDP then built the application, based on what operations wanted and on the timetable operations wanted. The basic model was: OPERATIONS does the business strategizing; IT just supports whatever operations decides. IT was just a service to be used, not really involved in the business strategies. IT was pretty much PASSIVE.
So it has been for the last thirty years or so.
Act two: My pain is hopefully your pain / What's wrong?
We've not really changed much today. Although technology itself has gotten faster and better for less money, applications are still built or modified to the frozen specs given by the business. The lifecycle of an application used to be longer. Now, because everything is moving much faster, an application's lifecycle is becoming shorter and shorter. Your frozen requirements today may be obsolete tomorrow. With this passive method, IT is forever trying to catch up with the business needs, and never does. It's bound to become unsustainable, or incredibly expensive, somewhere down the line.
IT should be business oriented. Most of the time, operations only knows how much resource IT needs and does not really drill down to the specifics of how IT is implementing the requirements, to see if IT has taken the bigger business picture into consideration – the future strategy and business direction that will affect the design and implementation decisions of the project at hand.
Here's another way this model is not going to be sustainable. Those of you running IT departments know all too well how "fickle" business or operations is about requirements. The classic complaint from the IT department is "you said you wanted this, now you want that?" Couple this with the rapidly changing business climate we have today, and requirements get more and more "fickle". If IT starts building and making design decisions the moment the business first gives specific requirements, IT will be forever turning the implementation inside out and upside down because of the myriad changes that will inevitably happen along the way. Think how much all this rework costs you.
We need a change. We cannot continue on this path. It's just NOT sustainable. The rate of change will not slow down. Our tools are getting better, but there are still only so many hours in a day and only so much money we can spend on one application. Staying competitive and making money is still KING, and the applications supporting the business must be able to keep up or the company will lose out.
Because of this, IT cannot afford to be passive and reactive anymore. IT cannot wait for operations to spell out the exact specs of things. We must start anticipating the changes in the near, medium and even long-term future, or else the cycle of build and rebuild will come around so often that the resources needed to sustain the bank will become enormous. Our tools – programming languages, machines, bandwidth, and processing power – have all gotten faster and better. Our methods have not. We are still building applications the way we did 30 years ago, even though our tools are "more high tech". Something has to change. Everything else has already changed. The only thing static is the way we build an application.
Act three: Core Banking
Prelude
I have always hoped that I could use the lessons I learned through the years, coupled with today's latest proven technology, to alleviate this otherwise unavoidable roadblock. My intent is not to invent a new tool or programming language, or even a radically new methodology. The tools used are the same ones available to you. I am not trying to change the approach, but the philosophical mindset of how we build software.
So what is the "breakthrough"? How does IT become more business oriented? How do we build applications that do not merely react to requirements, but proactively anticipate the changes that will happen?
Here's the thing, and maybe some of you are already doing this. Although the details of business processes and operations flows change, there is always some level of consistency throughout a business process's evolution. So instead of implementing a specific instance of a business process, why not identify the consistencies and generalize the processes, calculations and so on, so that the volatile parts become parameters of the generalized process.
Interest Calculation Example
For example, interest calculation: there are tons of variations on how interest is accrued and calculated – compound or simple interest, whether we calculate this before that, and so on.
But in the midst of all these different formulas, there's one constant. All interest is basically calculated like this:
I = P * r * t
where I is the interest, P is the principal amount, r is the rate and t is the duration or time.
That's it. All the variations will be in what P is, depending on the currency or the amount; or what r is, whether it's floating or fixed; and the time t – how long the loan runs, or how long the time deposit is kept or accrues. And so on.
So now we have the constant, the basic interest calculation formula I = P * r * t. We've identified that. We know that no matter how fancy interest calculation becomes, this formula will not change. We also know that the things that will change are the "parameters" P, r and t, and how each of them affects the others.
So instead of building a plain old program – or, to use a more technical term, a function or method – that takes in P, r and t and spits out I, we also build in the ways P, r and t can influence each other and how the interest is calculated.
This means we shouldn't even be writing one program per product, each with its own interest calculation function calling the basic formula. We should instead write something that determines what P, r and t are for different products, based on some sort of parameters. That makes it even more generic and reusable.
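For the programmers in the audience, here is a minimal sketch of what I mean, written in modern Java. Every name in it is illustrative rather than anyone's actual interface: the stable formula lives in one engine, and each product supplies only its own P, r and t.

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class InterestDemo {
        /** The volatile part: what P, r and t are for a given product. */
        interface InterestTerms {
            BigDecimal principal(); // P
            BigDecimal rate();      // r, annualized
            BigDecimal years();     // t, in years
        }

        /** The stable core: I = P * r * t, with rounding standardized in one place. */
        static BigDecimal interest(InterestTerms terms) {
            return terms.principal()
                    .multiply(terms.rate())
                    .multiply(terms.years())
                    .setScale(2, RoundingMode.HALF_EVEN); // one rounding policy for every product
        }

        public static void main(String[] args) {
            // A hypothetical six-month time deposit at 3.5% p.a., expressed purely as parameters.
            InterestTerms deposit = new InterestTerms() {
                public BigDecimal principal() { return new BigDecimal("100000"); }
                public BigDecimal rate()      { return new BigDecimal("0.035"); }
                public BigDecimal years()     { return new BigDecimal("0.5"); }
            };
            System.out.println(interest(deposit)); // prints 1750.00
        }
    }

A floating-rate product would simply supply a different InterestTerms implementation; the engine, including its rounding policy, never changes.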
And just like that, we have shielded the code from the constant change of the business environment, and at the same time we are able to use this piece of code over and over again in different parts of our application. In fact, technical concerns like consistent rounding are now standardized across every part of the application that needs interest calculation. There is no need for programmers to remember to code the interest calculation of a new product in a certain way. This minimizes inconsistencies and the potential for errors, and saves time in the end. Yes, we did have to spend more time in the beginning to get the first production version that uses interest calculation up and running. But making that initial "sacrifice" means that subsequent products needing the interest calculation function will enjoy its benefits. Future modifications are centralized: you only need to make a change in one place, and that one change benefits all the other products using the function. The manpower needed to modify and test changes can be kept to a minimum.
Of course, this model or way of building an application takes a lot of discipline and willpower from management, on both the operations or business side and the IT side. Like I said, IT has to become more business oriented, and it has to do so from the IT manager down to the individual programmers.
The key is that the core business nature is not affected (at least not that much) by the external changes. The pattern is the same no matter what fancy calculations, variations or derivatives are put on it. So why not save the repeated effort of doing one implementation per variation, put a bit more thought and effort in at the beginning to "generalize" the calculations, processes and flows, and then sit back and enjoy the benefit of not having to reinvent the wheel every time there's an addition or change.
Transaction Override Workflow Example
You can generalize calculations, like the example we just had. You can also generalize workflows. For example, a lot of banking transactions happen like this:
Enter the information, then press Submit. If something – most probably the amount or the submitter's authority – is not enough, you'll need somebody else to approve or reject the transaction. If that person's authority is still not enough, another person approves or rejects it, and so on up the chain until the transaction is fully approved.
This flow doesn't change. And as before, the factors that change are the field(s) that identify whether the submitter is authorized to submit the transaction, and the threshold for each submitter and/or approver.
So instead of building this flow into the logic, why not take it out and build an "engine" to process it. This leaves the programmers working on a specific transaction free to worry about the details of that transaction, instead of the whole override/approval flow. Again, as with the interest calculation example, pulling this feature out also gives you the "side effect" of standardizing the whole override/approval function and decoupling it from the nitty-gritty details of transaction processing.
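Again, purely as an illustrative sketch in modern Java – the names and limits are hypothetical, not any real product's – the chain of authorities and their limits is the parameter, and walking up the chain is the engine:

    import java.math.BigDecimal;
    import java.util.ArrayList;
    import java.util.List;

    public class OverrideEngine {
        /** Volatile part: an ordered chain of authorities and their transaction limits. */
        record Authority(String role, BigDecimal limit) {}

        /** Stable flow: collect sign-offs up the chain until one authority's limit covers the amount. */
        static List<String> approversNeeded(BigDecimal amount, List<Authority> chain) {
            List<String> needed = new ArrayList<>();
            for (Authority a : chain) {
                needed.add(a.role());
                if (a.limit().compareTo(amount) >= 0) {
                    return needed; // this authority's limit is enough; the chain stops here
                }
            }
            throw new IllegalStateException("Amount exceeds every authority in the chain");
        }

        public static void main(String[] args) {
            // Hypothetical limits: teller to 10,000; supervisor to 100,000; manager to 1,000,000.
            List<Authority> chain = List.of(
                    new Authority("teller", new BigDecimal("10000")),
                    new Authority("supervisor", new BigDecimal("100000")),
                    new Authority("manager", new BigDecimal("1000000")));

            // A 50,000 transaction: the teller submits, the supervisor must override.
            System.out.println(approversNeeded(new BigDecimal("50000"), chain));
            // prints [teller, supervisor]
        }
    }

Changing a threshold, or inserting a new approval level, then touches only the chain, never the transaction code.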
Three Layers and Time-to-Market
Similarly, your user interface and data access can be decoupled from your business logic. A lot of current software development thinking already preaches that – the three-layer approach: UI, business logic, and DB. The interest example and the override workflow example we just had deal only with the business logic layer.
The exact same philosophy we use for the business logic layer can also apply to how we engineer our application's UI layer and database layer. If the UI layer is sufficiently decoupled from the business logic – meaning the programmers writing business logic do not need to know the details of how rendering on a specific user interface works – we can basically turn the UI layer into an engine. We can add support for different delivery channels onto the UI Engine without turning the business logic inside out. And you are not just getting the decoupling from this: the side effect is that you get to standardize and generalize some of the frequently used UI functionality specific to banking applications into the UI Engine.
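As a rough illustration of what such a UI Engine boundary could look like – again a sketch in modern Java with made-up names, not a real product interface – the business logic talks only to a channel abstraction, and each delivery channel is just another implementation:

    import java.util.Scanner;

    public class UiEngineDemo {
        /** Volatile part: how fields are captured and shown on one delivery channel. */
        interface Channel {
            String ask(String fieldLabel);
            void show(String message);
        }

        /** A console "branch terminal" channel; an ATM or internet channel would be another implementation. */
        static class ConsoleChannel implements Channel {
            private final Scanner in = new Scanner(System.in);
            public String ask(String fieldLabel) {
                System.out.print(fieldLabel + ": ");
                return in.nextLine();
            }
            public void show(String message) {
                System.out.println(message);
            }
        }

        /** Business logic sees only the Channel abstraction, never the rendering details. */
        static void openAccount(Channel ui) {
            String name = ui.ask("Account holder name");
            ui.show("Account opened for " + name);
        }

        public static void main(String[] args) {
            openAccount(new ConsoleChannel()); // adding a delivery channel = adding a Channel implementation
        }
    }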
The same applies to your database layer. Instead of having the business logic programmers go down to the details of each different type of database, we decouple or shield those variations from them, creating a DB Engine between the actual database and the business logic in the process. The programmers will still need to write code to do database access, but that code will not be affected by a change of the database used underneath the business logic. The specifics of how to access each type of database are written and hidden inside the DB Engine. If we decide in the future that we need to support different types of database, we can put our resources into refining and enhancing the DB Engine, instead of having to modify every point in the code where the database is accessed.
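Here is the same idea sketched for the DB Engine, with hypothetical names and an in-memory store standing in for a real database; a DB2- or Oracle-backed implementation would simply be another class behind the same interface:

    import java.math.BigDecimal;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Optional;

    public class DbEngineDemo {
        /** Business logic sees only this banking-domain interface, never SQL or driver details. */
        interface AccountStore {
            Optional<BigDecimal> balanceOf(String accountNo);
            void post(String accountNo, BigDecimal newBalance);
        }

        /** One implementation per database product lives behind the interface; in-memory here for the sketch. */
        static class InMemoryAccountStore implements AccountStore {
            private final Map<String, BigDecimal> rows = new HashMap<>();
            public Optional<BigDecimal> balanceOf(String accountNo) {
                return Optional.ofNullable(rows.get(accountNo));
            }
            public void post(String accountNo, BigDecimal newBalance) {
                rows.put(accountNo, newBalance);
            }
        }

        public static void main(String[] args) {
            AccountStore store = new InMemoryAccountStore(); // swap for a real DB-backed store later
            store.post("001-234567-8", new BigDecimal("2500.00"));
            System.out.println(store.balanceOf("001-234567-8").orElseThrow()); // prints 2500.00
        }
    }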
In this way, we elevate the UI and DB layers, as we did the business layer, into a more banking-domain-centric model. Think how powerful this will be, especially for the bank's time-to-market capabilities.
Tying Things Up for More Time-to-Market Enhancement
So you've generalized everything, and it's all now in generic building-block form. We still need to tie the blocks together. This is where the fourth dimension, WORKFLOW, comes in.
Like I said before, nothing we've talked about here is new. I'm sure you have heard of the various workflow software available on the market. I'm not saying you have to build your own workflow engine; you can always buy one. But as with the UI Engine and DB Engine examples, you can always "specialize" the workflow engine to your banking application's needs, much like our earlier example with the override/approval workflow.
Once you have the whole system generalized into bite-size blocks, you can pick and choose, tweak the parameter settings of these little blocks, and string them together in a workflow. Now this is where the fun part comes in. To be the most "business oriented", you must be able to easily string together a workflow from your existing blocks to support a new business operation. Bringing back the override example: you might have a transaction that never needed overriding, but now, because of some regulatory or business environment change, an override is needed. You have a finite amount of time, and it's unwise to go in and turn the business logic inside out. Since we've already compartmentalized everything, you can keep your core business logic for that transaction and string in the override workflow functionality where it should be. What if you need to add an MIS update after the transaction for some seasonal report? Or add certain constraint checks before the start of the transaction? You can see how this setup can enhance time-to-market, as the sketch below shows.
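To make the stringing-together concrete, here is one last illustrative sketch – hypothetical names again – in which every generalized block shares one tiny contract and the workflow engine simply runs the blocks in order, so adding the override step or the MIS update is a matter of re-stringing the list:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class WorkflowDemo {
        /** Every generalized building block exposes the same tiny contract. */
        interface Step {
            void run(Map<String, Object> context);
        }

        /** The workflow engine just runs the blocks in the order they are strung together. */
        static void execute(List<Step> workflow, Map<String, Object> context) {
            for (Step step : workflow) {
                step.run(context);
            }
        }

        public static void main(String[] args) {
            // Existing blocks, reused as-is; a new requirement just re-strings the list.
            Step constraintCheck = ctx -> System.out.println("checking constraints for " + ctx.get("txn"));
            Step overrideStep    = ctx -> System.out.println("routing " + ctx.get("txn") + " for override");
            Step coreLogic       = ctx -> System.out.println("posting " + ctx.get("txn"));
            Step misUpdate       = ctx -> System.out.println("updating MIS for " + ctx.get("txn"));

            Map<String, Object> context = new HashMap<>();
            context.put("txn", "TXN-001"); // a hypothetical transaction reference

            // Yesterday: constraint check, then post. Today, regulation demands an override
            // step and a seasonal MIS update, so we simply string in the extra blocks.
            execute(List.of(constraintCheck, overrideStep, coreLogic, misUpdate), context);
        }
    }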
We are presenting this philosophy to you not as an absolute fix for the current state of the banking IT industry, but as another option to think about. Of course, the generalization you do today will not always work. We are bound to miss some factor somewhere that will pop up in the future and screw up our "engine". There will definitely be factors that you did not take into consideration at the time that will come up in future requirements. But think of this: instead of going to N number of places to change things, you can just go to your "engine" and enhance it with that new factor or requirement. Depending on what the change is, you may not even have to go back to each and every place interest calculation is called and modify it to fit your new requirement, and the change will be instantly enjoyed by every part of your application that uses the interest calculation. The change is also centralized, which makes software development management easier.
Our goal is real business-level "reuse", not just reuse at the technical "method" or "function" level. The savings are global. IT becomes more and more business oriented as it tries to anticipate the changes that will happen in the business and builds for them in the application well before the changes happen. This was certainly impossible in past decades, but thanks to Moore's Law, it is much more feasible today.
Act four: Basel II
We talked about the changing business landscape of the banking environment and how it affects how IT supports banking operations – the increased sophistication of customer requirements, faster turnaround times, fiercer competition, and technical advancement. The last decade or so has seen increased globalization and the Asian Financial Crisis, among other things. Banks are more interconnected with each other; an event happening on one side of the globe may affect a bank on the other side. We as banks now not only have to ensure our operations run smoothly and at a profit, we also need to focus on the RISK we expose ourselves to in this increasingly globalized environment. It is because of this that the new Basel accord was introduced, to better protect banks against another financial crisis.
So as banks shift from the management of their operations to the management of risks, IT as the "business oriented" service provider must also proactively build or modify its applications to support this risk management approach.
I don't need to go through the details of Basel II. As you all know, the information or data needed to come up with the risk assessment of a bank is not as straightforward as under the old Basel accord. I don't have to tell you that a lot of banks are turning their databases and applications inside out and spending a lot of money implementing data warehousing and data extraction applications just to get the information needed to comply with the Basel II requirements – and, on top of that, trying to get the maths people to understand the programmers, and vice versa.
The challenge here is how to proactively build applications with RISK management in mind. Also, once you have generalized your applications to shield against operations and business change, how do you add to them to cater to the risk management requirements without affecting all the work you have already done? How can you seamlessly integrate everything into one working application? These are some of the questions you may want to take home and contemplate.
Act five: Conclusion
To recap:
What we propose – shifting from reactive to proactive IT – is nothing groundbreaking. I think it is pretty logical, and I'm sure some of you have contemplated the problems and solutions we have talked about here. It is basically learning from our old pains, not being blinded by the fancy new technology available, and making your applications or IT solutions more proactive by anticipating the changes the business or operations will go through.
The Y2K situation was in a way lucky, because everybody was going through the same problem. Everybody was freezing everything at the same time, so competition slowed down a bit during Y2K. But now we don't have that luxury. Your competitors will not wait for you to finish your changes before rolling out a new service. You will not have the luxury of freezing everything and fixing it. Your applications will have to be agile enough to cope with the changes and still enable your bank to remain competitive. We can't go on having IT forever trying to keep up with the needs of the business and the regulatory environment. Like I said before, it's not sustainable, and it's not the best use of resources.
For those who like their buzzwords, I'd like to repeat: what we are proposing is not revolutionary. The way we try to learn from past pains and inefficiencies is very much in the spirit of CMM Level 5, or even Six Sigma and TQM.
Learn from the past. Run a "smarter" IT ship by moving from reacting to requirements to proactively anticipating what the business will need. Then, when the need comes, the bank can react quickly – with IT giving operations the applications to support it, quickly.
Act six: Self-Promotion
I had always wanted to change how things are being done, but never had the chance to do it. When the opportunity presented itself three years ago, it was a chance to start anew. I founded TechnoSolve based on the vision and philosophy I have described here. TechnoSolve is our attempt to revolutionize the way we develop software applications.
At TechnoSolve, we try to build all our applications, from our core banking modules to our risk management modules, this way – on the principle of proactively separating the volatile from the non-volatile and implementing our applications in the way I described earlier. And although we are a software vendor, that doesn't mean that you, as an in-house shop, cannot use this philosophy or principle when running your own IT projects and strategies.
We currently have the whole core banking system ready, with customers in both Hong Kong and Macau using it. If you want to learn more about our products, my colleagues standing over there and I will be more than happy to walk you through them and answer any questions. You can also email or call us.
Thank you and have a nice day.
Accompanying PowerPoint presentation