As 2016 draws towards its close, APIs (application programming interfaces) have emerged as serious revenue-generating tools. In a study of 300 IT companies, more than 80% of the firms reported annual revenues of $5 million or more from APIs alone. More broadly, revenue generation is viewed as one of the most important value propositions of these interfaces, which has prompted nearly 73% of the survey respondents to frame and implement detailed API strategies. However, all the hard work and resources behind API development can go to waste if the software is not tested properly. As with any web or mobile application, careful testing is critical for APIs. Here are a few common API testing mistakes you should steer well clear of:
API testing as a standalone activity
An API is supposed to work with, and add value to, the existing IT setup of businesses. Hence, testing it in a vacuum – without considering how the interface will behave in that ecosystem – is fairly useless. API providers have to keep in mind that not all API errors have the same cause; the environment in which an API is deployed also plays a role. Ideally, all stakeholders (the teams related to the API) should receive notifications about the test results. That gives a clear idea of how well, or otherwise, the new API fits into the organization's overall workflow. Testing an API entirely on your own is likely to yield a myopic, misleading picture.
Conducting only GUI testing
GUI (graphical user interface) testing is a useful part of API testing – but it is far from all you have to check. GUI testing tools typically do not show whether the API can be integrated smoothly into a mobile app or any other target software. According to professional API developers, GUI testing makes up only around 10% of the overall API testing effort. An interface or application tested only from the GUI perspective might still have serious functionality and/or integration issues. When you test APIs, do it thoroughly.
Considering app performance as proof of API quality
Let us now look at the matter from the opposite angle. An API might integrate easily into the backend of a mobile app, and the app might appear to work properly. However, that does not rule out basic bugs in the underlying API. Do not make the mistake of taking the quality of an API for granted simply because the end-user application it powers is working fine. Errors might emerge at a later stage, causing app crashes and forcing developers to check every element of the call chain. Make your API bug-free to start with, and minimize future performance risks.
Not checking for scalability
Over time, the user base of an API is likely to grow. As the frequency of network requests increases (even if you have implemented rate limits), the API needs to scale up. If scalability checks do not feature in your API testing routine, the risk of user overload remains, along with financial and legal problems. Make sure that your API is properly scalable, and establish an API governance mechanism (not particularly important at the start, but essential as the number of users grows). APIs aside, any web-based service that is not scalable is likely to fail sooner rather than later.
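The rate limits mentioned above are worth understanding concretely. As a rough sketch (not any particular provider's implementation), a token-bucket limiter allows short bursts while capping sustained request rates; the class and parameters below are illustrative:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter sketch: sustains `rate` requests
    per second, with bursts of up to `capacity` requests."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added back per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
# The first 10 burst requests pass; later ones are throttled because
# the refill over a few microseconds is negligible.
```

A scalability test can then hammer the API at just above and just below the limit, verifying that throttled requests get a clean rejection (e.g. HTTP 429) rather than degrading the service for everyone.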
Testing separate methods in isolation
Many new API developers make this mistake. They set up tests to check whether an interface works with different methods in isolation (say, checking inventory with GET and adding items to a shopping cart with POST). While these isolated tests might pass, the API might still fail when the methods are used together – as they will be by real users. A responsible API provider needs to check, and double-check, that a new API retains its intended functionality across combinations of methods. Once again, it is about testing the API's entire workflow – not its elements separately.
API testing is a one-shot activity
Far from it. Even after a web or mobile application ships, its underlying APIs have to be tested. If testing stops once the final version of the app is released, compatibility and/or functional issues can emerge when the app is ported to newer devices and operating systems. What's more, making random changes to an API can have serious adverse effects on API customers. A significant percentage of app performance issues can be traced back to problems in the underlying APIs (with the app itself not being buggy). Keep testing an API even after it has been integrated into an app and the app has launched. That is the best way to ensure the longevity of both.
Being liberal with SDKs and DevKits
At first glance, boosting the integrability of APIs with software development kits (SDKs) and libraries for separate programming languages seems a perfectly fine strategy. Closer inspection, however, reveals two problems. First, it is practically impossible to cover all the languages API consumers use (PHP, .NET, Node.js, Delphi, Java, etc.), and there are likely to be mismatches between your SDKs and the stack a customer company follows. Second, the more SDK libraries you ship, the greater your accountability as the API provider: when glitches appear, you have to delve into third-party code to detect and remove the problems. Decide at testing time how many SDKs and DevKits the API will have. Instead of adding too many, make sure the API itself is top-notch.
Lack of anticipation about threats and attacks
We have already highlighted how a lack of foresight about scalability can hurt an API. A good API provider also needs to be well-versed in common attack methods such as shell-script injection, SQL injection, social engineering, and XML bombs. What's more, rookie mistakes in client-side coding can leave an API open to DDoS (Distributed Denial of Service) attacks – unexpectedly high request volumes that overwhelm the service. Attacks on digital interfaces can also be initiated by passing crafted XML/JSON data as attachments. Unless an API architect can anticipate these attacks and add security layers to counter them, the testing procedure will be mostly fruitless.
Note: Fake network requests can also be generated in large volumes by agents out to deliberately harm your API. Rate limits should keep that risk at arm's length. API spoofing has also emerged as an attack technique to be wary of.
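One practical way to build this anticipation into testing is a negative-test suite that feeds known hostile payloads to the API and asserts they are rejected. The sketch below is illustrative only: `handle_search` is a hypothetical handler, and its keyword blocklist is a deliberately naive stand-in – real protection comes from parameterized queries, entity-expansion limits, and proper input encoding, not string matching.

```python
# Known hostile inputs drawn from the attack classes discussed above.
HOSTILE_PAYLOADS = [
    "'; DROP TABLE users; --",           # classic SQL injection
    "<!DOCTYPE x [<!ENTITY a 'aaaa'>]",  # XML-bomb style entity definition
    "$(rm -rf /)",                       # shell metacharacters
]

def handle_search(query):
    # Hypothetical handler: rejects input containing suspicious tokens.
    # (A naive blocklist for demonstration; real APIs should rely on
    # parameterized queries and strict parsers instead.)
    forbidden = ["'", ";", "<!", "$("]
    if any(token in query for token in forbidden):
        return {"status": 400, "error": "rejected suspicious input"}
    return {"status": 200, "results": []}

for payload in HOSTILE_PAYLOADS:
    response = handle_search(payload)
    # Every hostile payload must be rejected, never executed or echoed.
    assert response["status"] == 400, payload

# A benign query must still succeed.
assert handle_search("running shoes")["status"] == 200
```

Running a suite like this on every build turns security anticipation from a design-review checkbox into a repeatable regression check.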
Ignoring infrequent glitches
An API should work properly ALL the time, not MOST of the time. It is often a 'convenient' option for API developers to ignore problems that occur only intermittently, the usual justification being that the API's entire traffic log would have to be scanned to find the root causes of such glitches. What these developers do not realize is that these 'small issues' can grow into full-blown problems for end-users. No usability-related problem, however infrequent, should be glossed over: it may become more frequent later on, and debugging the interface then will be a lot more difficult.
Not doing regression testing
This mistake generally stems from the misplaced belief that an unchanged API will keep functioning in an app exactly as before. With application use cases growing more varied, and agile development methods being adopted more widely, the logic and core functions of apps often need to expand. APIs, for their part, have to be revalidated to keep supporting those platforms/apps as before. Unless API integrity is continually verified through regression test suites, APIs might not remain compatible with the fast-evolving applications they were built for.
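A minimal form of such a regression suite pins the API's contract in a checked-in "golden" copy and compares every build against it. The sketch below is a simplified illustration: `api_version_info` is a hypothetical endpoint handler, and in practice the golden data would live in a versioned file, not an inline string.

```python
import json

def api_version_info():
    # Hypothetical endpoint handler whose response shape forms part of
    # the API's contract with its consumers.
    return {"version": "2.1", "fields": ["id", "name", "price"]}

# In a real suite this would be loaded from a golden file committed to
# version control alongside the code.
GOLDEN = json.loads('{"version": "2.1", "fields": ["id", "name", "price"]}')

def test_contract_unchanged():
    current = api_version_info()
    # Any drift - a renamed field, a dropped key - fails the build
    # before it can break downstream consumers.
    assert current == GOLDEN, "API contract drifted; review before release"

test_contract_unchanged()
```

Because the comparison runs on every build, a change made for one feature cannot silently alter behaviour another consumer depends on.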
Failure to check API dependencies
Application programming interfaces are created to provide seamless server-side web connectivity for apps. However, the API you create often depends on certain partner code or software to deliver the desired functionality. Ignoring such dependencies during API testing can be a serious mistake. You need to find out whether your code works (a) consistently and (b) reliably with all the other software, services, and resources it depends on. Think beyond testing only your API – check the entire ecosystem in which it will be implemented.
Note: Problems in partner services can cause your API to malfunction precisely because of this dependency factor. Before launching your API, simulate failures in its dependencies and find out whether it still behaves properly.
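Simulating a dependency failure is straightforward with a test double. In the hedged sketch below, `fetch_exchange_rate` stands in for a hypothetical partner call, and `unittest.mock.Mock` substitutes both a healthy and a failing version so the test can confirm the API degrades gracefully instead of crashing:

```python
from unittest import mock

def fetch_exchange_rate(currency):
    # Hypothetical partner call; would hit a third-party service.
    raise RuntimeError("network unavailable in tests")

def price_in_currency(price_usd, currency, fetcher=None):
    fetcher = fetcher or fetch_exchange_rate
    try:
        rate = fetcher(currency)
    except Exception:
        # Partner outage: fall back to USD rather than erroring out.
        return {"price": price_usd, "currency": "USD", "degraded": True}
    return {"price": round(price_usd * rate, 2),
            "currency": currency, "degraded": False}

# Healthy dependency: the mock returns a conversion rate.
healthy = price_in_currency(10.0, "EUR", fetcher=mock.Mock(return_value=0.9))
assert healthy == {"price": 9.0, "currency": "EUR", "degraded": False}

# Failing dependency: the API should degrade, not crash.
broken = price_in_currency(10.0, "EUR", fetcher=mock.Mock(side_effect=TimeoutError))
assert broken["degraded"] is True and broken["currency"] == "USD"
```

The same pattern covers slow dependencies (a mock that sleeps) and malformed partner responses (a mock returning bad data), so the whole dependency surface gets exercised without a single live partner call.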
Over-reliance on manual testing
Manual-only testing makes things slower and more stressful – and, most importantly, can leave many bugs and errors undetected. Several reliable automated tools are currently available for API testing, and developers should use them extensively. Once the coding is done, the API definition can be imported into Swagger, and tests can be created directly from the files and definitions in your code (a tool like Runscope makes this easy). From the creation and modification of detailed API tests to test scheduling, all the important tasks can be handled by tools. Manual testing still matters, but relying on it exclusively is a bad idea.
API developers also need to specify a service contract for their interfaces – a standardized descriptor that enables consistent deployment and usage. Ignoring inordinately long API response times (10-12 seconds or more) is also a mistake; including response time assertions in your tests is a smart way to catch this issue. Instead of treating APIs as single request/response interactions, consider them complete applications. Test your APIs well – and make sure you avoid the mistakes above!
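A response time assertion is only a few lines. The sketch below uses a stub `call_api` in place of a real HTTP request (with a real client such as the `requests` library you would measure the actual round trip); the 10-second budget matches the threshold mentioned above:

```python
import time

MAX_RESPONSE_SECONDS = 10.0  # budget from the discussion above

def call_api():
    # Stand-in for a real HTTP call; the sleep simulates network latency.
    time.sleep(0.01)
    return {"status": 200}

start = time.monotonic()
response = call_api()
elapsed = time.monotonic() - start

assert response["status"] == 200
# Fail the test run outright if the call blows the latency budget.
assert elapsed < MAX_RESPONSE_SECONDS, f"response too slow: {elapsed:.1f}s"
```

Wiring this assertion into the automated suite turns latency from something noticed anecdotally into a hard pass/fail criterion on every build.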