(Virtual) cash is king

‘Cash is king’ still holds true when referring to cash flow, but the days when it meant holding a wallet full of greenbacks are over. Did you know that less than half of transactions in the US are paid in cash, and that cash’s share drops to less than a quarter when looking at the value of transactions instead of the volume?

An even more extreme example is Sweden, where an estimated 95% of transactions are completed by debit or credit card and which strives to be the first cash-free society, relying completely on digital currency. Cheques have been replaced by online banking and mobile apps, and contactless cards are replacing both our physical currency and old magnetic-stripe cards.

Using a digital currency brings many advantages: you won’t need to carry change around with you, (armed) robberies can become a thing of the past, and we can use smart software to handle all the administrative tasks that were previously done by an expensive army of accountants. And above all, it is a great thing because computers don’t make mistakes!

Or do they?

The shift towards a cashless society means that your hard-earned savings or your corporate profits are essentially reduced to zeroes and ones in a database on your bank’s computer system. Leaving artificial intelligence out for simplicity’s sake, computers are great at performing repetitive tasks. And barring predictable hardware limitations that can cause data corruption, you can count on your computer to perform the same complex algorithms time and time again and always come up with the same result.
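The same result is not always the *right* result, though: a classic pitfall in financial software is representing money as binary floating point, which cannot store most decimal fractions exactly. A minimal Python sketch of the problem, and the usual fix with decimal arithmetic:

```python
from decimal import Decimal

# Binary floats cannot represent 0.1 or 0.2 exactly, so tiny errors creep in.
subtotal = 0.1 + 0.2
print(subtotal)            # 0.30000000000000004 -- not what an accountant expects

# Decimal stores exact base-10 values, which is what money calculations need.
subtotal_exact = Decimal("0.10") + Decimal("0.20")
print(subtotal_exact)      # 0.30
```

The computer performed both additions perfectly; it is the humans who chose the number representation who made the mistake.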

Unfortunately, that same computer is not very good at writing, implementing or testing those algorithms itself. We still rely on business analysts to document the correct workflows, on software developers to translate those workflows into code that a computer can understand, and on testers to verify that the software always behaves as desired, no matter what edge case we throw at it. Remember: if it does not behave as desired, all the rewards for your hard work could go up in (virtual) smoke.

Testing critical software

Because we don’t have infinite time, we need to make smart choices about how we spend it. Software testers often use a simple risk analysis to prioritize their efforts: estimate how likely it is that a certain part of the application will fail (technical risk) and determine what the impact will be if it does (business risk). In financial applications the impact of any failure related to calculations or algorithms is almost always critical, as such a failure can realistically result in a significant monetary loss.
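This likelihood-times-impact prioritization can be sketched in a few lines of Python. The feature names and the 1–5 scales below are illustrative assumptions, not from the original:

```python
# Simple risk-based prioritization: score = technical risk x business risk.
# Feature names and the 1-5 scales are illustrative assumptions.
features = [
    {"name": "interest calculation", "technical": 3, "business": 5},
    {"name": "login page styling",   "technical": 2, "business": 1},
    {"name": "batch import",         "technical": 5, "business": 4},
]

for f in features:
    f["risk"] = f["technical"] * f["business"]

# Spend testing time on the riskiest features first.
for f in sorted(features, key=lambda f: f["risk"], reverse=True):
    print(f'{f["name"]}: risk score {f["risk"]}')
```

Even a crude score like this makes the trade-off explicit and defensible when test time runs short.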

Finding defects starts with clearly specifying how you expect the application to behave. Unless everyone is on the same page, the developer will implement their own interpretation of what is ‘correct’, the tester will use theirs to report bugs, and the end users may find that the application does not support their business processes or even produces incorrect results: production incidents. While unambiguous specifications often result in stable and reliable products, this does not mean you have to write functional documentation that gives the Encyclopedia Britannica a run for its money in page count. It does mean that you have to clearly document your business flows, the formulas behind calculations and the sets of business rules that are used. This is the work of a dedicated business analyst.

Every good functional tester takes the acceptance criteria only as a starting point on their journey to discover bugs. In addition to verifying data accuracy, focus should be placed on real-world scenarios that mimic how the application is used by its end users. Defining sufficient user scenarios alone is not enough: a tester is driven by curiosity and by the desire to make the application spew out results it should never have produced, by finding edge cases and testing the limits of input validation.
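Probing the limits of input validation usually means boundary-value testing: checking the values on both sides of every documented limit. A minimal sketch, where the validator and its 0.01–10,000 limits are hypothetical, not taken from the original:

```python
def validate_transfer_amount(amount):
    """Hypothetical business rule: transfers must be between 0.01 and 10,000.00."""
    return 0.01 <= amount <= 10_000.00

# Boundary values on both sides of each limit -- where defects tend to hide.
cases = [
    (-0.01,      False),  # just below zero
    (0.00,       False),  # exactly zero
    (0.01,       True),   # minimum allowed
    (9_999.99,   True),   # just under the ceiling
    (10_000.00,  True),   # maximum allowed
    (10_000.01,  False),  # just over the ceiling
]

for amount, expected in cases:
    assert validate_transfer_amount(amount) == expected, amount
print("all boundary cases pass")
```

Off-by-one errors in comparisons (`<` where `<=` was meant) are exactly the kind of defect this pattern catches.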

Most financial systems are linked with external systems or web services through well-defined protocols. It is very important for a tester to understand the ecosystem in which their application lives. Integration testing with these third-party systems can prove to be a challenge of its own, especially if you have no control over the availability and reliability of the web services that the application consumes. A common approach is to use test or development stubs for any external systems during the development and testing phases, and to connect to the live systems during chain integration testing. Do not forget to test how your application behaves when it receives no, incomplete, or corrupt data from an external web service.
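A stub for an external service can be as simple as a function returning canned responses, including the failure modes just mentioned. The exchange-rate service and its response format below are assumptions for illustration:

```python
import json

def exchange_rate_stub(scenario):
    """Stand-in for a hypothetical third-party exchange-rate web service."""
    responses = {
        "ok":         '{"currency": "EUR", "rate": 1.08}',
        "incomplete": '{"currency": "EUR"}',         # missing the rate field
        "corrupt":    '{"currency": "EUR", "rate"',  # truncated payload
        "empty":      "",                            # no data at all
    }
    return responses[scenario]

def parse_rate(raw):
    """Application-side parsing that must survive bad responses."""
    try:
        return json.loads(raw)["rate"]
    except (json.JSONDecodeError, KeyError):
        return None  # signal the caller to fall back, retry or alert

for scenario in ("ok", "incomplete", "corrupt", "empty"):
    print(scenario, "->", parse_rate(exchange_rate_stub(scenario)))
```

Because the stub is under the tester’s control, every failure mode can be triggered on demand instead of waiting for the live service to misbehave.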

Data is at the core of financial software, not only because it is what the software deals with, but also because it is sensitive. Testers have to find an efficient way to prepare enough realistic sample data for testing and to anonymize that data, while making sure that data leakage prevention is considered and implemented properly in each process.
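Anonymizing production-like data before it reaches a test environment can be sketched as below; the field names and masking rules are illustrative assumptions:

```python
import hashlib

def anonymize(record):
    """Mask personally identifiable fields while keeping the data realistic."""
    masked = dict(record)
    # Deterministic pseudonym: the same customer always maps to the same token,
    # so referential integrity across tables and test runs is preserved.
    masked["name"] = hashlib.sha256(record["name"].encode()).hexdigest()[:8]
    # Keep only the last four digits of the account number.
    masked["account"] = "****" + record["account"][-4:]
    return masked

# Illustrative record -- not real customer data.
sample = {"name": "Jane Doe", "account": "BE71096123456769", "balance": 1234.56}
print(anonymize(sample))
```

Keeping non-sensitive fields such as amounts intact means the anonymized set still exercises the same calculations as production data would.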

And finally, a functional test cycle ends with selecting the tests that should be added to the long list of automated tests.

Test automation is (the new) king

We can classify defects into two categories:

1. Regression: things that worked as expected before, but are broken in the latest build.

2. New defects: behavior in newly developed functionality that is not in line with what is desired or expected.

Unless you’re comfortable with test cycles that last three to six months, the only efficient way to catch regression defects is automated testing. Lots of it, and as often as possible. Make sure to test against the different layers of your application:

– Presentation layer: simulate how the application is used by end users, ensuring that your UI continues to function as expected. Perfect for end-to-end testing that covers all workflows. As UI testing is typically quite slow, we do not want to test all our variations here.

– Logic tier: tests against the API or web services that your application publishes are much quicker than UI tests. We can increase our data variations and either do end-to-end testing or test individual modules. This is a good place to cover all our edge cases and, if possible, full pipeline scenarios.

– Data tier: ensure that the data is correctly written to your persistent data storage systems. This can, for example, be done by comparing the database transaction records against the output on the logic tier or presentation layer.
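A data-tier check like the one above can be sketched as comparing what the logic tier reports against what was actually persisted. Here an in-memory SQLite database stands in for the real data store, and `post_transaction` is a hypothetical logic-tier call:

```python
import sqlite3

# In-memory SQLite stands in for the application's persistent data store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE transactions (id INTEGER PRIMARY KEY, amount_cents INTEGER)")

def post_transaction(amount_cents):
    """Hypothetical logic-tier call: records a transaction, returns its id."""
    cur = db.execute(
        "INSERT INTO transactions (amount_cents) VALUES (?)", (amount_cents,))
    db.commit()
    return cur.lastrowid

# Data-tier test: the amount accepted by the logic tier must match the stored row.
tx_id = post_transaction(12500)
(stored,) = db.execute(
    "SELECT amount_cents FROM transactions WHERE id = ?", (tx_id,)).fetchone()
assert stored == 12500
print("data tier matches logic tier")
```

Storing amounts as integer cents, as in this sketch, is also a common way to sidestep floating-point rounding in the persistence layer.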

Automated tests require an upfront investment. To get the best return on that investment, they have to be run as often as possible, preferably in a continuous integration environment. Also pay ample attention to how the results are presented: especially with complex financial software, it is not always clear whether something went wrong and, if it did, what caused the defect. A wrong result in a calculated field could originate from a change made to a field several pages earlier. So make sure that the test analyst has all the data they need to analyze the test results.

At Dilato we have plenty of experience with testing financial software; in fact, multiple financial institutions rely on the expertise of our test engineers to test their software every day. We’ve helped reduce regression testing time from months to mere hours by automating all tests, and we actively contribute to keeping the excellent reputation of these financial giants intact by ensuring that their staff and customers can work with software that works as designed and without errors.

Are you ready for a cashless world?