Our individual and collective digital footprints are expanding daily. And with global investment in digital transformation expected to almost double over the next three years, from USD 1.8 trillion to USD 2.8 trillion (Statista), this expansion is set to accelerate.
While there are great cost, efficiency and revenue benefits attached to the digitisation of business processes, there is also significant risk associated with the storage and use of captured data. In addition to operational and reputational risk, the introduction of a raft of regulatory requirements over the past decade adds compliance and regulatory risks to the mix.
Failure to comply with regulations can come with a very high price tag. For example, a severe infringement of the General Data Protection Regulation (GDPR) can attract a fine of up to EUR 20 million, or 4% of total global annual turnover for the preceding year, whichever is higher.
And, in certain circumstances, where it can be shown that risks have not been adequately mitigated, directors and board members can be held personally liable in the event of a data breach or data protection failure.
APIs often carry very sensitive data: consider the information exchanged with banks, credit card companies and healthcare services. Leakage or misuse of this data carries very high risks.
Now consider that Gartner, in its How to Build an Effective API Security Strategy report, predicts that “by 2022, API abuses will be the most-frequent attack vector resulting in data breaches for enterprise web applications.” The need for security and continuous testing across the API development and delivery cycle would seem quite clear cut.
There are two types of data to consider:
Data at rest: data housed on computer storage in any digital form, and
Data in transit: data en route between source and destination, typically over a computer network.
Companies must adopt stringent approaches to securing their ‘data at rest’ and ‘data in transit’ to ensure that a) it is not made available to ‘bad actors’ and b) it is not used for purposes other than those intended by the owner of the data.
Let’s look at some of the key areas of API security to be addressed (non-exhaustive):
Use Synthetic Data
A very simple rule of thumb that every company should adopt is to never expose live production data during testing (it sounds obvious, I know, but you would be surprised). While some companies use “masking” to obscure data, this is inadequate and inadvisable. The use of synthetic data is generally acknowledged as the preferred option for testing and is also fully compliant with GDPR (or any other data protection framework).
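As a minimal sketch of the idea, the record below is generated entirely from random values, so no field is derived from production data. The customer schema and field names here are hypothetical, chosen only for illustration:

```python
import random
import uuid

def make_synthetic_customer(rng: random.Random) -> dict:
    """Generate one fully synthetic customer record.

    Every field is fabricated, so the record carries no personal
    data and is safe to use in any test environment.
    """
    first_names = ["Alex", "Sam", "Jordan", "Morgan", "Taylor"]
    last_names = ["Smith", "Jones", "Murphy", "Kelly", "Byrne"]
    return {
        "id": str(uuid.UUID(int=rng.getrandbits(128), version=4)),
        "name": f"{rng.choice(first_names)} {rng.choice(last_names)}",
        # .test is a reserved TLD, so these addresses can never be real
        "email": f"user{rng.randint(1000, 9999)}@example.test",
        "account": "TEST" + "".join(str(rng.randint(0, 9)) for _ in range(18)),
    }

# Seeding the generator makes the synthetic data set reproducible,
# so failing tests can be replayed with identical inputs.
rng = random.Random(42)
customers = [make_synthetic_customer(rng) for _ in range(3)]
```

In practice a dedicated synthetic-data tool or library would generate richer, more realistic records, but the principle is the same: the data should be born fake, not derived from (or "masked" from) real records.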
Automated and continuous testing is a must to ensure that applications using APIs perform as desired. Every change must be fully tested, along with any process it impacts, to avoid unintended downstream consequences. The only way to achieve this is a fully automated testing strategy that is not subject to human intervention, and thereby human error.
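One building block of such a strategy is an automated contract check that runs on every build, with no manual step. The sketch below assumes a hypothetical payment-API response shape; the point is that any change breaking the contract fails the pipeline immediately:

```python
def check_contract(response: dict) -> list[str]:
    """Return a list of contract violations for an API response.

    An empty list means the response still honours the agreed
    contract; anything else should fail the CI pipeline.
    """
    # Hypothetical required fields for a payment response.
    required = {"id": str, "status": str, "amount": float}
    errors = []
    for field, expected_type in required.items():
        if field not in response:
            errors.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

# In CI this check would run against every affected endpoint
# after every change, with no human in the loop.
good = {"id": "p-1", "status": "settled", "amount": 10.0}
bad = {"id": "p-2", "amount": "10.0"}  # status missing, amount is a string
```

Real pipelines typically delegate this to a schema language (e.g. OpenAPI validation), but the mechanism is identical: machine-checkable expectations, evaluated on every change.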
Use Stateful Testing
Stubbing (faking) of API calls to downstream systems is often used in automated testing. While stubbing can facilitate testing, it does not fully exercise an application’s use of an API and opens up the potential for data loss or leakage. Stateful testing is required to ensure that the application under test uses an IT process correctly, since most business processes consist of more than one API call.
Negative Testing and Edge Cases
Most software engineers test an application the way they expect a user to use it. However, many bugs are caused by scenarios most people don’t think to check. With negative testing, you assert that the application handles unexpected user behaviour gracefully. You proactively explore these edge cases to improve the resilience of your APIs in production. Negative testing is often ignored, but skipping it means ignoring up to 80% of your code base, which is quite frankly a mistake.
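A small sketch of the idea, using a hypothetical input validator for a transfer amount (the function name and limits are illustrative): the negative cases below are exactly the inputs a "happy path" test would never send, and each one must be rejected cleanly rather than crash or pass through.

```python
def parse_transfer_amount(raw: str) -> int:
    """Parse a transfer amount in cents, rejecting invalid input."""
    if not raw.strip().isdigit():
        # Catches empty strings, negatives, decimals, and injection attempts.
        raise ValueError("amount must be a non-negative whole number")
    amount = int(raw)
    # Illustrative business rule: between 1 cent and EUR 1,000,000.
    if amount == 0 or amount > 100_000_000:
        raise ValueError("amount out of allowed range")
    return amount

# Negative cases: malformed, hostile, and boundary-breaking inputs.
bad_inputs = ["", "   ", "-50", "12.5", "1e9", "999999999999", "'; DROP TABLE"]
```

A negative test suite simply asserts that every one of these raises the expected error, while the single happy-path case still succeeds.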
Given that ‘data in transit’ will at some point travel over one or more public networks, encryption is not just a nice-to-have; it is essential. Without encryption, essentially anyone on the Internet can listen to the data passing by and patch together valuable information. Current encryption standards are extremely effective at protecting data, such that public networks can be used safely provided the data being transmitted is properly encrypted.
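In practice this means forcing API traffic onto TLS and refusing weak configurations. As a minimal sketch using Python's standard library, a client can pin a minimum protocol version on top of the defaults (certificate verification and hostname checking are already enabled by `ssl.create_default_context()`):

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a TLS context for API clients that refuses weak transport.

    create_default_context() already verifies server certificates and
    checks hostnames; we additionally require TLS 1.2 or newer, so
    data in transit is never sent over an obsolete protocol version.
    """
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = strict_client_context()
```

The same context can then be handed to any standard HTTP client when calling an API, ensuring every request travels encrypted end to end.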