As organizations break their applications down into microservices, the responsibility for securing these environments is shifting as well.
Modern applications are difficult to secure. Whether web or mobile, custom-developed or SaaS-based, applications are now scattered across different platforms and frameworks. To accelerate service development and business operations, applications rely on third-party resources that they interact with via APIs, orchestrated by state-of-the-art automation and synchronization tools. As a result, the attack surface grows and blind spots multiply, raising exposure to risk.
Applications, as well as APIs, must be protected against an expanding variety of attack methods and sources, and must be able to make educated decisions in real time to mitigate automated attacks. Moreover, applications change constantly, and security policies must adapt just as fast. Otherwise, businesses face increased manual labor and operational costs, in addition to a weaker security posture.
The WAF Ten Commandments
The OWASP Top 10 list serves as an industry benchmark for the application security community, and provides a starting point for ensuring protection from the most common and virulent threats, application misconfigurations that can lead to vulnerabilities, and detection tactics and mitigations. It also defines the basic capabilities required from a Web Application Firewall in order to protect against common attacks targeting web applications like injections, cross-site scripting, CSRF, session hijacking, etc. There are numerous ways to exploit these vulnerabilities, and WAFs must be tested for security effectiveness.
However, vulnerability protection is just the basics. Advanced threats force application security solutions to do more.
Challenge 1: Bot Management
52% of internet traffic is bot generated, half of which is attributed to “bad” bots. Unfortunately, 79% of organizations can’t make a clear distinction between good and bad bots. The impact is felt across all business arms as bad bots take over user accounts and payment information, scrape confidential data, hold up inventory and skew marketing metrics, thus leading to wrong decisions. Sophisticated bots mimic human behavior and easily bypass CAPTCHA or other challenges. Distributed bots render IP-based and even device fingerprinting based protection ineffective. Defenders must level up the game.
Challenge 2: Securing APIs
Machine-to-machine communications, integrated IoT devices, event-driven functions and many other use cases leverage APIs as the glue for agility. Many applications gather information and data from services with which they interact via APIs. Threats to API vulnerabilities include injections, protocol attacks, parameter manipulations, invalidated redirects and bot attacks. Businesses tend to grant APIs access to sensitive data without inspecting or protecting them against cyberattacks. Don’t be one of them.
Challenge 3: Denial of Service
Different forms of application-layer DoS attacks are still very effective at bringing application services down. This includes HTTP/S floods, low and slow attacks (Slowloris, LOIC, Torshammer), dynamic IP attacks, buffer overflow, Brute Force attacks and more. Driven by IoT botnets, application-layer attacks have become the preferred DDoS attack vector. Even the greatest application protection is worthless if the service itself can be knocked down.
Challenge 4: Continuous Security
For modern DevOps, agility is often valued at the expense of security. Development and roll-out methodologies such as continuous delivery mean applications are continuously modified. It is extremely difficult to maintain a valid security policy to safeguard sensitive data in dynamic conditions without creating a high number of false positives. This task has grown beyond human capacity, as the error rate and additional costs of manual policy management are enormous. Organizations need machine-learning-based solutions that map application resources, analyze possible threats, and create and optimize security policies in real time.
Protecting All Applications
It’s critical that your solution protects applications on all platforms, against all attacks, through all the channels and at all times. Here’s how:
- Application security solutions must encompass web and mobile apps, as well as APIs.
- Bot Management solutions need to overcome the most sophisticated bot attacks.
- Mitigating DDoS attacks is an essential and integrated part of application security solutions.
- A future-proof solution must protect containerized applications, serverless functions, and integrate with automation, provisioning and orchestration tools.
- To keep up with continuous application delivery, security protections must adapt in real time.
- A fully managed service should be considered to remove complexity and minimize resources.
In 2018, organizations reported a 10% increase in malware and bot attacks. Considering the pervasiveness (70%) of these types of attacks reported in 2017, this uptick is likely having a big impact on organizations globally. Compounding the issue is the fact that the majority of bots are actually leveraged for good intentions, not malicious ones. As a result, it is becoming increasingly difficult for organizations to identify the difference between the two, according to Radware’s Web Application Security in a Digitally Connected World report.
Bots are automated programs that run independently to perform a series of specific tasks, for example, collecting data. Sophisticated bots can handle complicated interactive situations. More advanced programs feature self-learning capabilities that can address automated threats against traditional security models.
Positive Impact: Business Acceleration
Automated software applications can streamline processes and positively impact overall business performance. They replace tedious human tasks and speed up processes that depend on large volumes of information, thus contributing to overall business efficiency and agility.
Good bots include:
- Crawlers — are used by search engines and contribute to SEO and SEM efforts
- Chatbots — automate and extend customer service and first response
- Fetchers — collect data from multiple locations (for instance, live sporting events)
- Pricers — compare pricing information from different services
- Traders — are used in commercial systems to find the best quote or rate for a transaction
Negative Impact: Security Risks
The Open Web Application Security Project (OWASP) lists 21 automated threats to applications that can be grouped together by business impacts:
- Scraping and Data Theft — Bots try to access restricted areas in web applications to get hold of sensitive data such as access credentials, payment information and intellectual property. One method of collecting such information is called web scraping. A common example of such an attack targets e-commerce sites, where bots quickly hold or even fully clear out the inventory.
- Performance — Bots can impact the availability of a website, bringing it to a complete or partial denial-of-service state. The consumption of resources such as bandwidth or server CPU immediately leads to a deterioration in the customer experience, lower conversions and a bad image. Attacks can be large and volumetric (DDoS) or not (low and slow, buffer overflow).
- Poisoning Analytics — When a significant portion of a website’s visitors are fictitious, expect skewed analytics, such as inflated traffic figures and fraudulent link clicks. Compounding this issue is the fact that third-party tools designed to monitor website traffic often have difficulty filtering bot traffic.
- Fraud and Account Takeover — With access to leaked databases such as Yahoo and LinkedIn, hackers use bots to run through usernames and passwords to gain access to accounts. Then they can access restricted files, inject scripts or make unauthorized transactions.
- Spammers and Malware Downloaders — Malicious bots constantly target mobile and web applications. Using sophisticated techniques like spoofing their IPs, mimicking user behavior (keystrokes, mouse movements), abusing open-source tools (PhantomJS) and headless browsers, bots bypass CAPTCHA, challenges and other security heuristics.
Blocking Automated Threats
Crude bot attacks against websites are easy to block with IP- and reputation-based signatures and rules. However, because attacks are growing in sophistication and frequency, it is important to be able to uniquely identify the attacking machine, a process referred to as device fingerprinting. The fingerprint should be IP-agnostic, yet unique enough to be acted upon with confidence. Because resourceful attackers may actively try to manipulate the fingerprint extracted by the web tool, it should also be resistant to client-side manipulation.
Web client fingerprint technology introduces significant value in the context of automated attacks, such as web scraping; Brute Force and advanced availability threats, such as HTTP Dynamic Flood; and low and slow attacks, where the correlation across multiple sessions is essential for proper detection and mitigation.
For each fingerprint-based, uniquely identified source, a historical track record is stored with all security violations, activity records and application session flows. Each abnormal behavior is registered and scored. Violation examples include SQL injection, suspicious session flow and high page access rate. Once a threshold is reached, the source with the marked fingerprint will not be allowed to access the secured application.
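As a rough illustration of this fingerprint-and-score flow, consider the following sketch. The violation weights, threshold and fingerprint values here are hypothetical illustrations, not actual product parameters:

```python
from collections import defaultdict
from typing import Dict, List

# Hypothetical weights for the violation types mentioned above.
VIOLATION_WEIGHTS: Dict[str, int] = {
    "sql_injection": 50,
    "suspicious_session_flow": 20,
    "high_page_access_rate": 10,
}
BLOCK_THRESHOLD = 100  # score at which a fingerprinted source is denied access


class FingerprintTracker:
    """Keeps a per-fingerprint track record of scored security violations."""

    def __init__(self) -> None:
        self.scores: Dict[str, int] = defaultdict(int)
        self.history: Dict[str, List[str]] = defaultdict(list)

    def record_violation(self, fingerprint: str, violation: str) -> None:
        # Unknown violation types still register, with a small default weight.
        self.scores[fingerprint] += VIOLATION_WEIGHTS.get(violation, 5)
        self.history[fingerprint].append(violation)

    def is_blocked(self, fingerprint: str) -> bool:
        # Once the accumulated score reaches the threshold, block the source.
        return self.scores[fingerprint] >= BLOCK_THRESHOLD
```

With these weights, two SQL-injection violations (2 × 50) from the same fingerprint reach the threshold and block it, while occasional low-weight anomalies from a legitimate user do not.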
Taking the Good with the Bad
Ultimately, understanding and managing bots isn’t about crafting a strategy driven by a perceived negative attitude toward bots because, as we’ve explained, bots serve many useful purposes for propelling the business forward. Rather, it’s about equipping your organization to act as a digital detective to mitigate malicious traffic without adversely impacting legitimate traffic.
Organizations need to embrace technological advancements that yield better business performance while integrating the necessary security measures to guard their customer data and experience.
If you’re looking to find a date in 2019, you’re in luck. Dozens of apps and sites exist for this sole purpose – Bumble, Tinder, OKCupid, Match, to name a few. Your next partner could be just a swipe away! But that’s not all; your personal data is likewise a swipe or click away from falling into the hands of cyber criminals (or other creeps).
Online dating, while certainly more popular and acceptable now than it was a decade ago, can be risky. There are top-of-mind risks—does s/he look like their photo? Could this person be a predator?—as well as less prominent (albeit equally important) concerns surrounding data privacy. What, if anything, do your dating apps and sites do to protect your personal data? How hackable are these apps, is there an API where 3rd parties (or hackers) can access your information, and what does that mean for your safety?
Privacy? What Privacy?
A cursory glance at popular dating apps’ privacy policies isn’t exactly comforting. For example, Tinder states, “you should not expect that your personal information, chats, or other communications will always remain secure.” Bumble isn’t much better (“We cannot guarantee the security of your personal data while it is being transmitted to our site and any transmission is at your own risk”) and neither is OKCupid (“As with all technology companies, although we take steps to secure your information, we do not promise, and you should not expect, that your personal information will always remain secure”).
Granted, these are just a few examples, but they paint a concerning picture. These apps and sites house massive amounts of sensitive data—names, locations, birth dates, email addresses, personal interests, and even health statuses—and don’t accept liability for security breaches.
If you’re thinking, “these types of hacks or lapses in privacy aren’t common, there’s no need to panic,” you’re sadly mistaken.
The fact is, dating sites and apps have a history of being hacked. In 2015, Ashley Madison, a site for “affairs and discreet married dating,” was notoriously hacked and nearly 37 million customers’ private data was published by hackers.
The following year, BeautifulPeople.com was hacked and the responsible cyber criminals sold the data of 1.1 million users, including personal habits, weight, height, eye color, job, education and more, online. Then there’s the AdultFriendFinder hack, Tinder profile scraping, Jack’d data exposure, and now the very shady practice of data brokers selling online data profiles by the millions.
In other words, between the apparent lack of protection and cyber criminals vying to get a hold of such personal data—whether to sell it for profit, publicly embarrass users, steal identities or build a profile on individuals for compromise—the opportunity and motivation to hack dating apps are high.
Dating is hard enough as it is, without the threat of data breaches. So how can you best protect yourself?
First things first: Before you sign up for an app, conduct your due diligence. Does the app use SSL/TLS-encrypted data transfers? Does it share your data with third parties? Does it authorize through Facebook (which lacks certificate verification)? Does the company accept any liability to protect your data?
Once you’ve joined a dating app or site, beware of what personal information you share. Oversharing details (education level, job, social media handles, contact information, religion, hobbies, information about your kids, etc.), especially when combined with geo-matching, allows creepy would-be daters to build a playbook on how to target or blackmail you. And if that data is breached and sold or otherwise publicly released, your reputation and safety could be at risk.
Likewise, switch up your profile photos. Because so many apps are connected via Facebook, using the same picture across social platforms lets potential criminals connect the dots and identify you, even if you use an anonymous handle.
Finally, you should use a VPN and ensure your mobile device is up-to-date with security features so that you mitigate cyber risks while you’re swiping left or right.
It’s always better to be safe and secure than sorry.
By next year, it is estimated that there will be 20.4 billion IoT devices, with businesses accounting for roughly 7.6 billion of them. While these devices are the next wireless innovation to improve productivity in an ever-connected world, they also represent nearly 8 billion opportunities for breaches or attacks.
In fact, 97% of companies believe IoT devices could wreak havoc on their organizations, and with good reason. Security flaws can leave millions of devices vulnerable, creating pathways for cyber criminals to exfiltrate data—or worse. For example, a July 2018 report disclosed that nearly 500 million IoT devices at businesses worldwide were susceptible to cyberattacks because of a decade-old web exploit.
A New Attack Environment
In other words, just because these devices are new and innovative doesn’t mean your security is, too. To further complicate matters, 5G networks will begin to roll out in 2020, creating a new atmosphere for mobile network attacks. Hackers will be able to exploit IoT devices and leverage the speed, low latency and high capacity of 5G networks to launch unprecedented volumes of sophisticated attacks, ranging from standard IoT attacks to burst attacks, and even smartphone infections and mobile operating system malware.
So, who is responsible for securing these billions of devices to ensure businesses and consumers alike are protected? Well, right now, nobody. And there’s no clear agreement on what entity is—or should be—held accountable. According to Radware’s 2017-2018 Global Application & Network Security Report, 34% believe the device manufacturer is responsible, 11% believe service providers are, 21% think it falls to the private consumer, and 35% believe business organizations should be liable.
Ownership Is Opportunity
Indeed, no one group is raising its hand to claim ownership of IoT device security. But if service providers want to protect their networks and customers, they should jump at the chance to take the lead here. While service providers technically don’t own the emerging security issues, it is ultimately the operators who are best positioned to deal with and mitigate attack traffic. While many may view this as an operational cost, it is, in actuality, a business opportunity.
In fact, the Japanese government is so concerned about a large-scale IoT attack disrupting the 2020 Tokyo Olympics that it just passed a law empowering the government to intentionally identify and hack vulnerable IoT devices. And who is the government asking to secure the list of devices it finds vulnerable? Consumers? Businesses? Manufacturers? No, no, and NO. It is asking service providers to secure these devices from attacks.
Think about it: Every device connected to a network is another potential security weakness. And as we’ve written about previously, IoT devices are especially vulnerable because of manufacturers’ priority to maintain low costs, rather than spending more on additional security features. If mobile service providers create a secure environment that satisfies the protection of customer data and devices, they can establish a competitive advantage and reap financial rewards.
From Opportunity to Rewards
This translates to the potential for capturing new revenue streams. If your mobile network is more secure than your competitors’, it stands to reason that their customer attrition becomes your win. And mobile IoT businesses will pay an additional service premium for the knowledge that their IoT devices won’t be compromised and can maintain 100% availability.
What’s more, service providers need to be mindful of history repeating itself. After providers lost the war with Apple and Google to control apps (and their associated revenue), they earned the unfortunate reputation of being “dumb pipes.” Conversely, Apple and Google were heralded for capturing all the value of the explosion of mobile apps. Apple now sits at twice the valuation of AT&T and Verizon, COMBINED. Now, as we stand on the precipice of a similar explosion of IoT apps that enterprises will buy, the question again arises: will service providers just sell “dumb pipes,” or will they get involved in the value chain?
A word to the wise: Don’t be a “dumb” carrier. Be smart. Secure the customer experience and reap the benefits.
Read “Creating a Secure Climate for your Customers” today.
The S in HTTPS is supposed to mean that encrypted traffic is secure. For attackers, it just means that they have a larger attack surface from which to launch assaults on the applications to exploit the security vulnerabilities. How should organizations respond?
Most web traffic is encrypted to provide better privacy and security. As of 2018, over 70% of webpages were loaded over HTTPS, and Radware expects this trend to continue until nearly all web traffic is encrypted. The major drivers pushing adoption are the availability of free SSL certificates and the perception that cleartext traffic is insecure.
While encrypting traffic is a vital practice for organizations, cyber criminals are not necessarily deterred by it. They are looking for ways to use encrypted traffic as a platform from which to launch attacks that can be difficult to detect and mitigate, especially at the application layer. As encrypted applications grow more complex, the potential attack surface grows larger. Organizations need to incorporate protection of the application layer into their overall network security strategies. Results from the global industry survey revealed a 10% increase in encrypted attacks on organizations in 2018.
Encrypted Application Layers
When planning protection for encrypted applications, it is important to consider all of the layers that are involved in delivering an application. It is not uncommon for application owners to focus on protecting the encrypted application layer while overlooking the lower layers in the stack which might be vulnerable. In many cases, protection selected for the application layer may itself be vulnerable to transport-layer attacks.
To ensure applications are protected, organizations need to analyze the following Open Systems Interconnection (OSI) layers:
- Transport — In most encrypted applications, the underlying transport is TCP. TCP attacks come in many forms and volumes, so protection must be resilient enough to shield applications from attacks on the TCP layer. Some applications now use QUIC, which runs over UDP and adds reflection and amplification risks to the mix.
- Session — The SSL/TLS layer itself is vulnerable. Once an SSL/TLS session is created, the server invests about 15 times more compute power than the client, which makes the session layer particularly vulnerable and attractive to attackers.
- Application — Application attacks are the most complex type of attack, and encryption only makes it harder for security solutions to detect and mitigate them. Attackers often select specific areas in applications to generate a high request-to-load ratio, may attack several resources simultaneously to make detection harder, or may mimic legitimate user behavior in various ways to bypass common application security solutions. The size of an attack surface is determined by the application design. For example, in a login attack, botnets perform multiple login attempts from different sources to try to stress the application. The application login is always encrypted and requires resources on the application side such as a database, authentication gateway or identity-service invocation. The attack does not require a high volume of traffic to affect the application, making it very hard to detect.
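The login-attack example above suggests why aggregate, endpoint-level monitoring matters: in a distributed attack, each source stays under any per-IP threshold. A minimal sketch of that idea follows; the window size and failure baseline are assumed illustrative values, not figures from the text:

```python
from collections import deque


class LoginAnomalyMonitor:
    """Flags a distributed login attack by watching the aggregate failure
    rate on the login endpoint rather than the rate of any single source."""

    def __init__(self, window_s: float = 60.0, baseline_failures: int = 20) -> None:
        self.window_s = window_s
        self.baseline = baseline_failures  # expected failures per window (assumed)
        self.failures: deque = deque()     # timestamps of failed login attempts

    def _prune(self, now: float) -> None:
        # Drop failures that have aged out of the sliding window.
        while self.failures and now - self.failures[0] > self.window_s:
            self.failures.popleft()

    def record_failure(self, now: float) -> None:
        self.failures.append(now)
        self._prune(now)

    def under_attack(self, now: float) -> bool:
        self._prune(now)
        return len(self.failures) > self.baseline
```

Because the monitor counts failures across all sources, a botnet sending one attempt per bot still trips the endpoint-level baseline, even though no individual IP looks abnormal.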
Organizations also need to consider the overall environment and application structure because it greatly affects the selection of the ideal security design based on a vulnerability assessment.
- Content Delivery Network — Applications using a content delivery network (CDN) pose a challenge for security controls deployed at the origin. Technologies that use the source IP to analyze client application behavior see only the source IP of the CDN, so they risk either over-mitigating and disrupting legitimate users or becoming ineffective. Behind a CDN, protection based on source IP addresses produces high rates of false positives and is effectively pointless. Instead, the selected security technology should be able to analyze attacks that originate behind the CDN, using measures such as device fingerprinting or extraction of the original source from the application headers.
- Application Programming Interface — Application programming interface (API) usage is common in all applications. According to Radware’s The State of Web Application Security report, a third of attacks against APIs intend to cause a denial-of-service state. The security challenge here comes from the legitimate client side. Many solutions rely on active user-validation techniques to distinguish legitimate users from attackers, and these techniques require a real browser at the client. With an API, a legitimate browser is often not at the client side, so the behavior and the legitimate response to various validation challenges differ.
- Mobile Applications — Like APIs, the client side is not a browser for a mobile application and cannot be expected to behave and respond like one. Mobile applications pose a challenge because they rely on different operating systems and use different browsers. Many security solutions were created based on former standards and common tools and have not yet fully adapted. The fact that mobile apps process a high amount of encrypted traffic increases the capacity and security challenges.
- Directionality — Many security solutions only inspect inbound traffic to protect against availability threats. Directionality of traffic has significant implications on the protection efficiency because attacks usually target the egress path of the application. In such cases, there might not be an observed change in the incoming traffic profile, but the application might still become unavailable. An effective security solution must process both directions of traffic to protect against sophisticated application attacks.
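For the CDN case above, extracting the original source from application headers typically means parsing the forwarding headers the CDN appends. Here is a minimal sketch; the header name and trusted-proxy list are assumptions, and note that `X-Forwarded-For` values are spoofable unless the CDN strips client-supplied entries:

```python
from typing import Dict, Optional, Set


def original_client_ip(headers: Dict[str, str],
                       trusted_proxies: Set[str]) -> Optional[str]:
    """Best-effort extraction of the real client IP from behind a CDN."""
    xff = headers.get("X-Forwarded-For", "")
    hops = [h.strip() for h in xff.split(",") if h.strip()]
    # Walk right to left: the rightmost entries were appended by our own
    # CDN/proxy hops; the first untrusted address is the likely real client.
    for hop in reversed(hops):
        if hop not in trusted_proxies:
            return hop
    return None
```

For example, given the header value `"203.0.113.7, 198.51.100.10"` and a trusted set containing only `198.51.100.10`, the function returns `"203.0.113.7"` as the original client.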
A major selection criterion for security solutions is regulatory compliance. In the case of encrypted attacks, compliance requirements examine whether traffic is decrypted, which parts of the traffic are decrypted and where the decryption happens. The governing paradigm has always been that the more intrusive the solution, the more effective the security, but that is not necessarily the case here: solutions show different levels of effectiveness at the same level of intrusiveness.
The encryption protocol in use has implications toward how security can be applied and what types of vulnerabilities it represents. Specifically, TLS 1.3 generates enhanced security from the data privacy perspective but is expected to generate challenges to security solutions which rely on eavesdropping on the encrypted connection. Users planning to upgrade to TLS 1.3 should consider the future resiliency of their solutions.
Determining attack patterns is the most important undertaking that organizations must master. Because there are so many layers that are vulnerable, attackers can easily change their tactics mid-attack. The motivation is normally twofold: first, inflicting maximum impact with minimal cost; second, making detection and mitigation difficult.
- Distribution — The level of attack distribution is very important to the attacker. It impacts the variety of vectors that can be used and makes the job harder for the security controls. Most importantly, the more distributed the attack, the less traffic each attacking source has to generate. That way, behavior can better resemble legitimate users. Gaining control of a large botnet used to be difficult to do and extremely costly. With the growth in the IoT and corresponding IoT botnets, it is common to come across botnets consisting of hundreds of thousands of bots.
- Overall Attack Rates — The overall attack traffic rate varies from one vector to another. Normally, the lower the layer, the higher the rate. At the application layer, attackers are able to generate low-rate attacks, which still generate significant impact. Security solutions should be able to handle both high- and low-rate attacks, without compromising user experience and SLA.
- Rate per Attacker — Many security solutions in the availability space rely on the rate per source to detect attackers. This method is not always effective as highly distributed attacks proliferate.
- Connection Rates — Available attack tools today can be divided into two major classes based on their connection behavior. The first class includes tools that open a single connection and generate many requests on it. The second includes tools that open many connections with only a single request, or very few requests, on each connection. Security tools that can analyze connection behavior are more effective in discerning legitimate users from attackers.
- Session Rates — SSL/TLS session behavior has various distinct characteristics in legitimate users and browsers, whose major goal is to optimize performance and user experience. Attack traffic does not usually adhere fully to those norms, so its SSL session behavior is different. The ability to analyze encryption-session behavior contributes to protecting both the encryption layer and the underlying application layer.
- Application Rates — Because the application is the most complex part to attack, attackers have the greatest degree of freedom in application behavior. Attack patterns vary greatly from one attack to another in how they appear in application behavior analyses. At the same time, the rate of change in the application itself is so high that it cannot be followed manually. Security tools that can automatically analyze a large variety of application aspects and, at the same time, adapt to changes quickly are expected to be more effective in protecting against encrypted application attacks.
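The connection-rate distinction above, one connection carrying many requests versus many connections carrying one request each, can be sketched as a crude classifier. The cutoff values here are arbitrary illustrations, not thresholds from the text:

```python
from typing import List


def classify_connection_behavior(requests_per_connection: List[int]) -> str:
    """Label a source by the request counts seen on each of its connections."""
    n_conns = len(requests_per_connection)
    avg_requests = sum(requests_per_connection) / n_conns
    if n_conns <= 2 and avg_requests >= 50:
        # Tools that open one connection and push many requests through it.
        return "single-connection flood"
    if n_conns >= 50 and avg_requests <= 2:
        # Tools that open a fresh connection for nearly every request.
        return "connection-per-request flood"
    return "unremarkable"
```

A real solution would combine this with session and application-rate signals, but even this toy version shows why connection behavior separates the two tool classes cleanly.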
Protection from encrypted availability attacks is becoming a mandatory requirement for organizations. At the same time, it is one of the more complex tasks to thoroughly perform without leaving blind spots. When considering a protection strategy, it is important to take into account various aspects of the risk and to make sure that, with all good intentions, the side door is not left open.
Whether you’re an executive or practitioner, brimming with business acumen or tech savviness, your job is to preserve and grow your company’s brand. Brand equity relies heavily on customer trust, which can take years to build and only moments to demolish. 2018’s cyber threat landscape demonstrates this clearly; the delicate relationship between organizations and their customers is in hackers’ crosshairs and suffers during a successful cyberattack. Make no mistake: Leaders who undervalue customer trust–who do not secure an optimized customer experience or adequately safeguard sensitive data–will feel the sting in their balance sheet, brand reputation and even their job security.
Radware’s 2018-2019 Global Application and Network Security report builds upon a worldwide industry survey encompassing 790 business and security executives and professionals from different countries, industries and company sizes. It also features original Radware threat research, including an analysis of emerging trends in both defensive and offensive technologies. Here, I discuss key takeaways.
Repercussions of Compromising Customer Trust
Without question, cyberattacks are a viable threat to operating expenditures (OPEX). This past year alone, the average estimated cost of an attack grew by 52% and now exceeds $1 million (the number of estimates above $1 million increased by 60%). For organizations that formalized a real calculation process rather than merely estimating the cost, that number is even higher, averaging $1.67 million.
Despite these mounting costs, three in four organizations have no formalized procedure to assess the business impact of a cyberattack. This becomes particularly troubling when you consider that most organizations have experienced some type of attack within the course of a year (only 7% of respondents claim not to have experienced an attack at all), with 21% reporting daily attacks, a significant rise from 13% last year.
Cost evaluations vary considerably across verticals. Retail and high-tech report the highest damage, while education stands out with its extremely low financial-impact estimates.
Repercussions can vary: 43% report a negative customer experience, 37% suffered brand reputation loss and one in four lost customers. The most common consequence was loss of productivity, reported by 54% of survey respondents. For small-to-medium sized businesses, the outcome can be particularly severe, as these organizations typically lack sufficient protection measures and know-how.
It would behoove all businesses, regardless of size, to consider the following:
- Direct costs: Extended labor, investigations, audits, software patches development, etc.
- Indirect costs: Crisis management, fines, customer compensation, legal expenses, share value
- Prevention: Emergency response and disaster recovery plans, hardening endpoints, servers and cloud workloads
Risk Exposure Grows with Multi-Dimensional Complexity
As the cost of cyberattacks grows, so does their complexity. Information networks today are amorphous. In public clouds, they undergo constant metamorphosis, as instances of software entities and components are created, run and disappear. We are marching toward a no-visibility era, and as complexity grows, it will become harder for business executives to analyze potential risks.
The increase in complexity immediately translates to a larger attack surface, or in other words, a greater risk exposure. DevOps organizations benefit from advanced automation tools that set up environments in seconds, allocate necessary resources, and provision and integrate with each other through REST APIs, providing a faster time to market for application services with minimal human intervention. However, these tools process sensitive data and cannot defend themselves from attacks.
Protect Your Customer Experience
The report found that the primary goal of cyberattacks is service disruption, followed by data theft. Cyber criminals understand that service disruptions result in a negative customer experience, and to this end, they utilize a broad set of techniques. Common methods include bursts of high traffic volume, use of encrypted traffic to exhaust security solutions' resources, and crypto-jacking, which reduces the productivity of servers and endpoints by enslaving their CPUs for the sake of mining cryptocurrencies. Indeed, 44% of organizations surveyed suffered either ransom attacks or crypto-mining by cyber criminals looking for easy profits.
What’s more, attack tools became more effective in the past year; the number of outages grew by 15% and more than half saw slowdowns in productivity. Application layer attacks—which cause the most harm—continue to be the preferred vector for DDoSers over the network layer. It naturally follows, then, that 34% view application vulnerabilities as the biggest threat in 2019.
Essential Protection Strategies
Businesses understand the seriousness of the changing threat landscape and are taking steps to protect their digital assets. However, some tasks – such as protecting a growing number of cloud workloads, or discerning a malicious bot from a legitimate one – require leveling up the defense. Security solutions must support and enable business processes, and as such, should be dynamic, elastic and automated.
Analyzing the 2018 threat landscape, Radware recommends the following essential security solution capabilities:
- Machine Learning: As hackers leverage advanced tools, organizations must minimize false positives in order to optimize the customer experience. This can be achieved with machine-learning capabilities that analyze big data samples for maximum accuracy (nearly half of survey respondents point to security as the driver for exploring machine-learning-based technologies).
- Automation: When so many processes are automated, the protected objects constantly change, and attackers quickly change lanes, trying a different vector every time. As such, a security solution must be able to immediately detect and mitigate a threat. Solutions based on machine learning should be able to auto-tune security policies.
- Real-Time Intelligence: Cyber delinquents can disguise themselves in many forms. Compromised devices sometimes make legitimate requests, while at other times they are malicious. Machines behind a CDN or NAT cannot be blocked based on IP reputation, and in general, static heuristics are becoming useless. Instead, actionable, accurate real-time information can reveal malicious activity as it emerges and protect businesses and their customers – especially when it relies on analysis and qualification of events from multiple sources.
- Security Experts: Keep human supervision for the moments when the pain is real. Human intervention is required in advanced attacks or when the learning process requires tuning. Because not every organization can maintain the know-how in-house at all times, having an expert from a trusted partner or a security vendor on-call is a good idea.
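The multi-source correlation idea behind real-time intelligence can be sketched very simply. The signal names, weights and threshold below are assumptions for illustration, not any product's actual logic:

```python
# Sketch: correlate reputation signals from multiple sources into one verdict.
# Signal names, weights and the threshold are illustrative assumptions.

SIGNAL_WEIGHTS = {
    "ip_reputation": 0.2,       # weak on its own: NAT and CDNs hide real clients
    "device_fingerprint": 0.3,  # distributed bots rotate fingerprints
    "behavioral_anomaly": 0.5,  # strongest: mimicking humans at scale is hard
}

def threat_score(signals: dict) -> float:
    """Weighted sum of per-source scores, each in [0, 1]."""
    return sum(SIGNAL_WEIGHTS[name] * value for name, value in signals.items())

def verdict(signals: dict, block_threshold: float = 0.7) -> str:
    """Block only when the combined evidence crosses the threshold."""
    return "block" if threat_score(signals) >= block_threshold else "allow"

# A client with a clean IP but strongly bot-like behavior still gets blocked.
print(verdict({"ip_reputation": 0.1, "device_fingerprint": 0.9, "behavioral_anomaly": 0.95}))
```

The point of the sketch: no single static heuristic decides; a weak IP-reputation signal is outweighed by behavioral evidence, which is why analysis across multiple sources matters.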
It is critical for organizations to incorporate cybersecurity into their long-term growth plans. Securing digital assets can no longer be delegated solely to the IT department. Rather, security planning needs to be infused into new product and service offerings, development plans and new business initiatives. CEOs and executive teams must lead the way in setting the tone and invest in securing their customers' experience and trust.
Millennials are the largest generation in the U.S. labor force—a position they’ve held since 2016—and they’re involved in the majority (73%) of B2B purchasing decisions. Raised in the age of the Internet, they’re digital natives and easily adopt and adapt to new technologies. And mobile apps are their lifelines.
Why does this matter? Well, when you combine Millennials’ tech savviness with their business acumen, their clout in a digital economy comes into focus. As both decision-makers and connoisseurs of mobile technology, they can make or break you in a low-growth economy if your business model doesn’t square with their preferences.
In other words, if you’re not embracing mobile commerce, you may soon be ancient history. This generation has little-to-no use for brick-and-mortar storefronts, banks, etc., instead preferring to use apps for shopping, financial transactions and more.
Of course, making m-commerce a linchpin of your business model isn’t risk free; cybersecurity concerns are of critical importance. Increasingly, personal data protection is tied directly to consumer loyalty to a particular brand, and Millennials in particular care about how their data is used and safeguarded.
You Can’t Rush Greatness
While Millennials are renowned for an “I want it fast, and I want it now” attitude (which explains why 63% of them use their smartphone to shop every day, versus trekking to a store), the biggest mistake you can make is overlooking security in a rush to roll out a mobile strategy.
The fact is, vulnerabilities on m-commerce platforms can result in severe financial impacts; the average cost of a corporate data breach is $3.86 million. If a mobile app or mobile responsive e-commerce site is hit by an application attack, for example, short-term profit loss (which can escalate quickly) and longer-term reputation loss are serious risks. And as we move into 2019, there are several mobile security threats that we need to take seriously.
Baking cybersecurity into your mobile strategy—as a core component, not an add-on—is, without question, necessary. The reason is manifold: For one thing, mobile devices (where your app primarily lives) are more susceptible to attacks. Secondly, mobile commerce websites are often implemented without a web application firewall to protect them. Thirdly, Millennials' reliance on m-commerce, both as B2B and B2C consumers, means you stand to lose significant business if your app or website goes "down." And finally, Millennials are security conscious.
Securing the Secure Customer Experience
So how can you help ensure your m-commerce platform, and thereby your Millennial customer base, is secure? A number of ways:
- Guard your app’s code from the get-go. Test the code for vulnerabilities, ensure it’s easy to patch, and protect it with encryption.
- Consider a Web Application Firewall (WAF) to secure your APIs and your website.
- Run real-time threat analytics.
- Be mindful of how customer data is stored and secured. (Don’t pull an Uber and store data unencrypted!)
- Patch often. Because security threats evolve constantly, so must your security patches! Just ask Equifax about the importance of patching…
Of course, this isn’t an exhaustive list of proactive security measures you can take, but it’s a good start. As I’ve said time and time again, in an increasingly insecure world where security and availability are the cornerstones of the digital consumer, cybersecurity should never be placed on the back burner of company priorities. Don’t wait for an attack to up your security game. At that point, trust is broken with your Millennial customer base and your business is in trouble. Be proactive. Always.
Serverless architectures are revolutionizing the way organizations procure and use enterprise technology. Until recently, information security architecture was relatively simple; you built a fortress around a server containing sensitive data, and deployed security solutions to control the flow of users accessing and leaving that server.
But how do you secure a serverless environment?
The Basics of Serverless Architecture
Serverless architecture is an emerging trend in cloud-hosted environments and refers to applications that significantly depend on third-party services (known as Backend-as-a-Service or “BaaS”) or on custom code that’s run in ephemeral containers (known as Function-as-a-Service or “FaaS”). And it is significantly more cost effective than buying or renting servers.
The rapid adoption of micro-efficiency-based pricing models (a.k.a PPU, or pay-per-use) pushes public cloud providers to introduce a business model that meets this requirement. Serverless computing helps providers optimize that model by dynamically managing the allocation of machine resources. As a result, organizations pay based on the actual amount of resources their applications consume, rather than ponying up for pre-purchased units of workload capacity (which is usually higher than what they utilize in reality).
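The pricing difference can be illustrated with a back-of-the-envelope comparison. All rates and volumes below are made up for illustration:

```python
# Back-of-the-envelope: pre-purchased capacity vs. pay-per-use (rates are illustrative).

def prepurchased_cost(capacity_units: int, unit_price: float) -> float:
    """You pay for the full reserved capacity, used or not."""
    return capacity_units * unit_price

def pay_per_use_cost(invocations: int, price_per_invocation: float) -> float:
    """You pay only for what actually ran."""
    return invocations * price_per_invocation

reserved = prepurchased_cost(capacity_units=100, unit_price=50.0)  # sized for peak load
actual = pay_per_use_cost(invocations=2_000_000, price_per_invocation=0.000002)
print(f"reserved: ${reserved:.2f}, pay-per-use: ${actual:.2f}")
```

The gap between the two numbers is exactly the "higher than what they utilize in reality" overhead the text describes: reserved capacity is priced for the peak, while pay-per-use tracks actual consumption.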
What’s more, going serverless also frees developers and operators from the burdens of provisioning the cloud workload and infrastructure. There is no need to deploy operating systems and patch them, no need to install and configure web servers, and no need to set up or tune auto-scaling policies and systems.
Security Implications of Going Serverless
The new serverless model forces a complete change in architecture: nanoservices composed of many software 'particles.' The operational unit is a set of function containers that execute REST API functions, which are invoked upon a relevant client-side event. These function instances are created, run and then terminated. During their runtime, they receive, modify and send information that organizations want to monitor and protect. The protection should be dynamic and swift:
- There is no perimeter or OS to secure.
- Agents and a persistent footprint become redundant.
- To optimize the business model, the solution must be scalable and ephemeral; automation is the key to success.
If we break down our application into components that run in a serverless model, the server that runs the APIs uses different layers of code to parse the requests, essentially enlarging the attack surface. However, this isn't an enterprise problem anymore; it's the cloud provider's. Unfortunately, even they sometimes lag in patch management and hardening workloads. Will your DevOps team read all of the cloud provider's documentation in detail? Most likely, they'll go with generic permissions. If you want to do something right, you'd better do it yourself.
Serverless computing doesn’t eradicate all traditional security concerns. Application-level vulnerabilities can still be exploited—with attacks carried out by human hackers or bots—whether they are inherent in the FaaS infrastructure or in the developer function code.
When using a FaaS model, the lack of local persistent storage encourages data transfer between the function and different persistent storage services (e.g., AWS S3 and DynamoDB) instead. Additionally, each function eventually processes data received from storage, the client application or a different function. Every time data is moved, it becomes vulnerable to leakage or tampering.
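One mitigation sketch for the tampering risk: have the function attach an integrity tag before handing data to persistent storage, and verify it on the way back. This uses only the Python standard library; key management is out of scope, and the key below is a placeholder that would come from a secrets manager in practice:

```python
import hashlib
import hmac
import json

# Placeholder only: in practice, fetch this from a secrets manager, never hard-code it.
SECRET_KEY = b"placeholder-key-from-a-secrets-manager"

def seal(record: dict) -> dict:
    """Serialize a record and attach an HMAC tag before persisting it."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def unseal(stored: dict) -> dict:
    """Verify the tag when the record returns from storage; fail on tampering."""
    payload = stored["payload"].encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, stored["tag"]):
        raise ValueError("record was tampered with in storage")
    return json.loads(payload)

sealed = seal({"user": "alice", "balance": 100})
assert unseal(sealed) == {"user": "alice", "balance": 100}
```

Encryption can be layered on top of the same seal/unseal boundary; the key point is that every hop between function and storage gets an explicit integrity check.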
In such an environment, it is impossible to track all potential and actual security events. One can't follow each function's operation to prevent it from accessing the wrong resources. Visibility and forensics must be automated and perform real-time contextual analysis. But the question is not whether serverless is more or less secure; rather, it is how to secure it when your organization goes there.
A New Approach
Simply put, going serverless requires a completely different security approach—one that is dynamic, elastic, and real-time. The security components must be able to move around at the same pace as the applications, functions and data they protect.
First things first: To help avoid code exploitation (which is what attacks boil down to), use encryption, and monitor the function's activity and data access so that it has, by default, minimum permissions. Abnormal function behavior, such as unexpected access to data or an unreasonable traffic flow, must be analyzed.
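A minimal default-deny sketch of the "minimum permissions" idea: each function declares the only resources it may touch, and anything outside that set is denied and flagged. The function and resource names are hypothetical:

```python
# Default-deny resource access per function. All names are hypothetical examples.

ALLOWED_RESOURCES = {
    "checkout_fn": {"orders_table", "payments_queue"},
    "thumbnail_fn": {"images_bucket"},
}

def authorize(function_name: str, resource: str) -> bool:
    """Allow only explicitly listed resources; unknown functions get nothing."""
    return resource in ALLOWED_RESOURCES.get(function_name, set())

assert authorize("thumbnail_fn", "images_bucket")
# Unexpected access to data: deny it, and treat the attempt itself as a signal.
assert not authorize("thumbnail_fn", "orders_table")
```

In a real deployment this is the role of the cloud provider's IAM policies; the sketch just shows the shape of the rule: deny by default, allow by exception, and alert on every denial.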
Next, consider additional measures, like a web application firewall (WAF), to secure your APIs. While an API gateway can manage authentication and enforce JSON and XML validity checks, not all API gateways support schema and structure validation, nor do they provide full coverage of OWASP Top 10 vulnerabilities the way a WAF does. WAFs apply dozens of protection measures on both inbound and outbound traffic, which is parsed to detect protocol manipulations. Client-side inputs are validated, and thousands of rules are applied to detect various injection attacks, XSS attacks, remote file inclusion, direct object references and many more.
In addition to detecting known attacks, for the purposes of zero-day attack protection and comprehensive application security, a high-end WAF allows strict policy enforcement where each function can have its own parameters whitelisted—the recommended approach when deploying a function that processes sensitive data or mission-critical business logic.
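A toy version of per-function parameter whitelisting: each function's inputs are checked against a strict schema, and anything unexpected is rejected. The function name and schemas below are invented for illustration, not real WAF configuration:

```python
import re

# Invented example schemas: parameter name -> regex the value must fully match.
FUNCTION_SCHEMAS = {
    "transfer_funds": {"account_id": r"\d{10}", "amount": r"\d{1,7}(\.\d{2})?"},
}

def validate(function_name: str, params: dict) -> bool:
    """Strict whitelist: unknown functions, unknown or missing parameters,
    and malformed values are all rejected."""
    schema = FUNCTION_SCHEMAS.get(function_name)
    if schema is None or set(params) != set(schema):
        return False
    return all(re.fullmatch(schema[k], str(v)) for k, v in params.items())

assert validate("transfer_funds", {"account_id": "1234567890", "amount": "99.50"})
# An injection payload never matches the whitelisted pattern, known attack or not.
assert not validate("transfer_funds", {"account_id": "1 OR 1=1", "amount": "99.50"})
```

Because the check describes what valid input looks like rather than what attacks look like, it also catches zero-day payloads that no signature would match.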
And—this is critical—continue to mitigate DDoS attacks. Going serverless does not eliminate the potential for falling susceptible to these attacks, which have changed dramatically over the past few years. Make no mistake: With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever.
In my previous blog post, I addressed the need for and process of creating applications faster and building an adaptive infrastructure that suits real consumption. Today I will highlight how automation can help ensure security across an adaptive infrastructure and manage traffic irregularities.
How Can I Guarantee My Security Level?
By using automation, we can also guarantee a level of security for any new application by automatically deploying security rules when a new app is published. There is no risk of human error or of forgetting something; when a new app is deployed, the security is attached automatically. This is very powerful, but it needs to be very "industrial." Exceptions are not the friend of automation; it is very important to standardize applications for use with automation.
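Conceptually, the deployment pipeline gains one mandatory step: publishing an app and attaching the baseline policy become a single operation. The policy fields and app names below are assumptions for illustration:

```python
# Sketch of a deployment hook: publishing an app always attaches the standard
# security policy. Policy fields and app names are illustrative assumptions.

STANDARD_POLICY = {"waf": "enabled", "ddos_profile": "default", "tls": "required"}

deployed_apps = {}

def publish_app(name: str, image: str) -> dict:
    """Deploy the app and attach the baseline security policy in the same step."""
    app = {"name": name, "image": image, "security": dict(STANDARD_POLICY)}
    deployed_apps[name] = app
    return app

app = publish_app("billing", "registry.example/billing:1.4")
assert app["security"]["waf"] == "enabled"  # no manual step, nothing to forget
```

Because the policy is copied from one standard template, every app gets identical protection; this is the "industrial" standardization the text calls for, and exactly why per-app exceptions undermine it.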
IoT devices are the leading source of DDoS attacks: these devices are provisioned very quickly, but with little to no attention paid to security. Many botnets target IoT devices in order to take control of large numbers of machines. There are many apps and vulnerabilities that hackers can exploit to gain access to these devices and build a very large botnet.
How Can I Have an Adaptive Infrastructure?
With Kubernetes, it is very easy to add more containers (or pods) to an application in order to handle more client connections. Kubernetes has its own load-balancing mechanisms to share the load between several containers. However, this service is very limited and does not give access to all the features we need on a reverse proxy to expose the application to the rest of the world (NAT, SSL offload, L7 load balancing, etc.).
By using an intermediate orchestrator for L4-L7 services such as load balancing, DDoS protection and WAF – acting as an abstraction layer – any change in Kubernetes can notify the orchestrator, which triggers automation workflows to update the infrastructure accordingly:
- Modify/create/scale up/scale down an ADC service to expose the app externally with full capabilities (SSL, NAT, L7 modification, L7 load balancing, persistence, cache, TCP optimization)
- Modify/create/scale up/scale down DDoS or WAF services to protect this new exposed application
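The abstraction-layer idea above can be sketched as a small event handler: Kubernetes changes drive the L4-L7 services. The event shape and the orchestrator's internal model are invented for this sketch, not any vendor's API:

```python
# Illustrative event loop: Kubernetes changes drive ADC/WAF/DDoS service updates.
# The event format and service model are invented assumptions for this sketch.

class L4L7Orchestrator:
    def __init__(self):
        self.services = {}  # app name -> provisioned L4-L7 services

    def handle_k8s_event(self, event: dict) -> str:
        app, replicas = event["app"], event["replicas"]
        if replicas == 0:
            # App removed: tear down its exposure and protection services.
            self.services.pop(app, None)
            return f"deleted ADC/WAF services for {app}"
        action = "updated" if app in self.services else "created"
        self.services[app] = {"adc_capacity": replicas, "waf": True, "ddos": True}
        return f"{action} ADC/WAF services for {app} ({replicas} pods)"

orch = L4L7Orchestrator()
print(orch.handle_k8s_event({"app": "vod-frontend", "replicas": 3}))
print(orch.handle_k8s_event({"app": "vod-frontend", "replicas": 6}))
```

The external tools never touch the ADC or WAF directly; they only emit scale events, and the orchestrator translates those into create/modify/delete operations on the protection layer.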
How Can I Manage Exceptional Events That Temporarily Increase My Traffic?
Consider the example of a VOD service: it will be used differently depending on the time of day. It will experience huge peaks of traffic in the evening when people are watching their TVs, but during the day the traffic will dramatically decrease as most people are at work.
If you scale your application and infrastructure to manage your evening peak of traffic, it will cost a lot, and that compute will sit unused during the day; this is not optimal.
With automation, we can do something smarter by provisioning compute resources according to real needs. That means my application will run on a few servers during the day and on several servers during the evening. If I use the public cloud to host my application, I will pay only for my consumption, rather than paying for a lot of computing power during the day that I don't use.
Again, this agility should exist at the application layer but also at the infrastructure layer. My ADC, anti-DDoS, or WAF services should not be statically sized for my evening peak traffic; they should adapt to my real load.
Using an intermediate automation orchestrator provides an intelligent workflow to follow this trend. In the evening, it can automatically provision new ADC, DDoS, or WAF services on new hosts to provide more computing power and handle more client requests, then de-provision them when they are no longer needed.
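The evening/daytime pattern can be expressed as a simple schedule-driven sizing rule. The hours and instance counts below are illustrative assumptions; a real workflow would combine the schedule with live load metrics:

```python
# Schedule-driven sizing for ADC/DDoS/WAF instances.
# Hours and instance counts are illustrative assumptions.

def required_instances(hour: int, baseline: int = 2, peak: int = 8,
                       peak_start: int = 18, peak_end: int = 23) -> int:
    """Return how many service instances to run for a given hour of day (0-23)."""
    return peak if peak_start <= hour <= peak_end else baseline

assert required_instances(hour=14) == 2  # daytime: minimal footprint, minimal cost
assert required_instances(hour=20) == 8  # evening VOD peak: scale out
```

An orchestrator evaluating this rule each hour would provision six extra instances at 18:00 and de-provision them after 23:00, so the protection layer follows the same curve as the application it protects.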
It is important to also have a flexible license model with a license server that dynamically dispatches the license to the ADC, WAF, or DDoS services.
With an intermediate orchestrator, Radware technologies can be used in complex SDDC environments. It provides an abstraction layer, based on workflows, that simplifies integration with external tools like Ansible, Cisco ACI, Juniper Contrail, OpenStack, and Kubernetes.
vDirect exposes a REST API that is used to trigger a workflow. For example, a workflow can "manage virtual service" with three actions:
- Create a new virtual service (real server, server group, load balancing algorithm, health check, DDoS, WAF, etc.)
- Modify an existing virtual service (add a real server, change DDoS rules, change load balancing algorithms, etc.)
- Delete an existing virtual service (delete the ADC, DDoS, and WAF configuration)
From an external orchestrator, integration is very simple: a single REST call to the "manage virtual service" workflow. Given all the necessary parameters, vDirect performs all the automation on Radware devices such as ADC, anti-DDoS, and WAF.
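As an illustration only, such a workflow call might look like the sketch below. The endpoint path, field names and values are assumptions for this sketch, not the documented vDirect API; consult the product documentation for the actual interface:

```python
import json

# Hypothetical request builder for a "manage virtual service" workflow call.
# The URL path and all field names are illustrative assumptions.

def build_workflow_request(action: str, service: str, params: dict) -> dict:
    """Assemble a single REST call carrying every parameter the workflow needs."""
    return {
        "url": "https://vdirect.example/api/workflow/manage-virtual-service/run",
        "method": "POST",
        "body": json.dumps({"action": action, "service": service, **params}),
    }

req = build_workflow_request(
    "create", "vod-frontend",
    {"lb_algorithm": "round-robin", "health_check": "http", "waf": True, "ddos": True},
)
print(req["method"], req["url"])
```

The design point is that the caller expresses intent once ("create this virtual service with WAF and DDoS protection") and the workflow fans that out to the individual devices, which is what keeps the external orchestrator's side to a single REST call.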