The newly published OWASP Top 10 2017 Release Candidate introduces a new application security risk: under-protected APIs.
It’s no secret that managing information security is becoming more complex, with more threats and more solutions to stay on top of. It makes me wonder whether we are reaching the limits of the human mind when it comes to efficient information analysis and sound decision-making. I am quite certain that, given day-to-day constraints, information security professionals are definitely getting to that point.
In the ever-evolving search for more precise, more efficient operations, we break information units (whether they consume, store or process data) into smaller pieces that perform more specific activities.
As a result, many services today run as a set of functions that are consumed individually and synchronize through APIs. Business operations are delivered more efficiently and assembled much faster, because the flexibility and scalability of APIs simplify the architecture.
However, as many APIs are mission-critical and expose major functionality and business processes, they introduce a wide range of risks and vulnerabilities. The combination of growing adoption and security risk was the major driver for adding under-protected APIs to the 2017 OWASP Top 10 list.
So on one hand we have a great advantage for the modern DevOps environment, which is constantly under the pressure of continuous delivery (to keep the service optimal at all times), and on the other hand a security challenge. You can’t avoid APIs: to be as elastic and resilient as possible, there is no choice but to automate the complete process, from provisioning and management, through platform-management applications, to the staging and production environments where the service runs.
Rapid FaaS evolution is driving API adoption
When functions are consumed to form an operational unit that is a set of function containers rather than a web server, we are talking about a new architecture referred to as “serverless.” These functions are exposed as APIs to the client-side application, which may invoke them upon relevant client-side events. That yields greater complexity compared to using virtual machines, because the containers exist on a need basis: they are created to perform a certain role only when required and are destroyed afterwards, allowing efficiency and cost savings.
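To make the model concrete, here is a minimal sketch of such a function: a handler invoked by a client-side event through an API gateway. The event shape and names (`handle_order_event`, the `"body"` key) are illustrative assumptions, not any specific platform's contract.

```python
import json

def handle_order_event(event, context=None):
    """Sketch of a FaaS-style handler: the platform spins up a container
    only for this invocation, then tears it down."""
    # The raw HTTP body arrives as a JSON string inside the event (assumed shape).
    body = json.loads(event.get("body", "{}"))
    item = body.get("item", "unknown")
    # Business logic would run here; the container exists only for this call.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"order received for {item}"}),
    }

# Example invocation, as an API gateway might deliver it:
response = handle_order_event({"body": json.dumps({"item": "widget"})})
```

The function itself is the API: there is no long-running server process, which is exactly where the efficiency gain (and the new attack surface) comes from.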
The API Security Challenge
It is also grounds for trouble. Traditional application security assessment tools used for scanning and testing often fail to invoke an API because they cannot generate the request properly or supply the right data (even when they know whether to use JSON or XML).
In a typical API, third-party frameworks and libraries use custom methods to read a JSON or XML document from the HTTP request and pass it to the API code for handling. These methods change constantly, limiting the success rate of such tools.
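A short sketch of why generic scanners struggle: the deserialization layer below expects one precise, undocumented envelope before the API code is ever reached. All names here (`deserialize_transfer`, the `envelope`/`payload` structure) are hypothetical.

```python
import json

def deserialize_transfer(raw_body: str) -> dict:
    """Custom framework glue: read the raw HTTP body and hand a typed
    payload to the API handler."""
    doc = json.loads(raw_body)
    # Only this exact nesting is accepted; anything else raises before
    # the business logic runs.
    payload = doc["envelope"]["payload"]
    if payload.get("type") != "transfer":
        raise ValueError("unsupported payload type")
    return payload

def api_transfer(payload: dict) -> str:
    """The actual API code a scanner would need to exercise."""
    return f"transfer {payload['amount']} to {payload['account']}"

# A fuzzer sending generic form data or flat JSON never reaches api_transfer;
# only a request shaped like this one does:
raw = json.dumps({"envelope": {"payload": {
    "type": "transfer", "amount": 100, "account": "acct-1"}}})
result = api_transfer(deserialize_transfer(raw))
```

A scanner that cannot reconstruct this envelope will see only parsing errors, not the code paths behind them.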
The risks within APIs are not essentially different from those of any other application, and fall into the familiar categories of broken authentication, injection, misconfiguration and weak data encryption.
The focus should be on the integrity and confidentiality of data in transit, but not solely that. Attackers can violate access permissions and hijack sessions to trick an API (or the service that consumes it) into performing a series of unwanted actions, such as leaking sensitive data. Protocol-evasion techniques (HTTP null bytes, TCP packet reordering and more), manipulation of JSON/XML parameters (such as oversized element values), and bot activity (brute force, scraping or even DDoS) against the API are major concerns to pay attention to.
When using a Web Application Firewall, verify that it applies the same security controls to APIs as it does to regular web applications. Access restrictions, XML/JSON schema validation, limits on repeated identical attempts, protection of data and keys, message size policies and so on should be implemented and monitored.
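Two of those controls, schema validation and a message size policy, can be sketched in a few lines. This is a minimal hand-rolled validator under assumed limits and a flat illustrative schema; a real WAF or a schema-validation library would enforce much richer rules.

```python
import json

MAX_BODY_BYTES = 4096    # message size policy (illustrative limit)
MAX_FIELD_LENGTH = 256   # guard against oversized element values

# Assumed flat schema: field name -> required Python type.
SCHEMA = {"user": str, "amount": int}

def validate_request(raw_body: bytes) -> dict:
    """Reject a request unless it satisfies size and schema policy."""
    if len(raw_body) > MAX_BODY_BYTES:
        raise ValueError("message size policy violated")
    doc = json.loads(raw_body)
    if set(doc) != set(SCHEMA):
        raise ValueError("unexpected or missing fields")
    for field, expected_type in SCHEMA.items():
        value = doc[field]
        if not isinstance(value, expected_type):
            raise ValueError(f"bad type for {field}")
        if isinstance(value, str) and len(value) > MAX_FIELD_LENGTH:
            raise ValueError(f"oversized value for {field}")
    return doc
```

The point is that every check runs before the payload reaches the API code, which is where a WAF sits in the request path.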
Generally, combining a positive and a negative security model provides robust protection for the API infrastructure. Knowing the API’s normal behavior as well as bad behavior patterns leads to a more secure environment with an extremely low rate of false positives, thus facilitating continuous delivery.