As organizations break their applications down into microservices, the responsibility for securing these environments is shifting as well.
Application development now requires NetOps and SecOps to work together, thereby putting increased stress on developers and engineers.
Many organizations use public cloud service providers, some in addition to their private cloud and on-premises deployments. The right product mix not only reduces vendor lock-in and shadow IT, but also serves the full range of constituents: IT administrators, network and security operations, and DevOps.
Maintaining application security and configurations across multiple environments is complex and error-prone, and it increases the attack surface. Careful testing is required to protect business-critical applications from hacking attempts, which may include denial of service, network and application attacks, malware, bots and impersonation.
A successful implementation will include not only the right cloud provider and the correct security, licensing and cost model, but also the appropriate automation tools to help secure the technology and security landscape consistently as applications are rolled out through a continuous integration and continuous delivery (CI/CD) process.
When Does Automation Become a Pressing Issue?
The reasons to automate may be resource constraints, configuration management, compliance or monitoring. For example, an organization may have very few people managing a large set of configurations, the required skill set may span both networking and security products, or the network operations team may lack operational knowledge of all the devices it manages.
Below are a few benefits that automation provides:
- Time savings and fewer errors for repetitive tasks
- Cost reduction for complex tasks that require specialized skills
- Ability to react quickly to events, for example:
  - Automatically commission new services at 80% utilization and decommission at 20%
  - Automatically adjust security policies to optimally address peacetime and attack traffic
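The utilization-based commission/decommission rule above can be sketched as a simple threshold check. This is a minimal illustration, not any vendor's API; the function name and return values are hypothetical.

```python
def scaling_action(utilization: float) -> str:
    """Return a scaling decision based on current utilization.

    Thresholds follow the example above: commission new capacity
    at 80% utilization, decommission at 20%.
    """
    if utilization >= 0.80:
        return "commission"    # add a service instance
    if utilization <= 0.20:
        return "decommission"  # remove a service instance
    return "hold"              # stay within the healthy band

print(scaling_action(0.85))  # commission
print(scaling_action(0.50))  # hold
```

In practice a monitoring system would feed this decision into an orchestration tool rather than print it, but the core logic is just such a comparison against agreed thresholds.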
Automate the Complexity Away?
Let us consider a scenario where a development engineer has an application ready and needs to test application scalability, business continuity and security against a load balancer before rolling it out through IT.
The developer may not have the time to wait for a long provisioning timeline, or the expertise and familiarity with the networking and security configurations. The traditional way would be to open a ticket, have an administrator reach out, understand the use case and then create a custom load balancer for the developer to test. This is certainly expensive to do, and it hinders CI/CD processes.
The objective here would be to enable self-service, in a way the developer can relate to, so that they can test against the load balancer without networking and security intricacies getting in the way. A common approach is to create a workflow that automates tasks using templates and, if the workflow spans multiple systems, hides the complexity from the developer by orchestrating them.
Successful end-to-end automation consists of several incremental steps that build on each other. For example, identify all the administrator tasks that are prone to introducing configuration errors. Then script them – say, using CLI or Python scripts. Now you’re at a point where you’re ready to automate.
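The "script it first" step can be as simple as rendering a consistent configuration from a template so every request produces the same result. The sketch below assumes a made-up virtual-server syntax; the fields and pool notation are hypothetical, not any specific vendor's configuration language.

```python
from string import Template

# Template for a load-balancer virtual server; the syntax here is
# illustrative only.
LB_TEMPLATE = Template(
    "virtual-server $name\n"
    "  listen $vip:$port\n"
    "  pool ${name}-pool members $members\n"
)

def render_lb_config(name: str, vip: str, port: int, members: list[str]) -> str:
    """Fill the template so every request yields a consistent config."""
    return LB_TEMPLATE.substitute(
        name=name, vip=vip, port=port, members=",".join(members)
    )

print(render_lb_config("app1", "10.0.0.10", 443, ["10.0.1.5", "10.0.1.6"]))
```

Once a task is captured as a repeatable script like this, wiring it into an automation or orchestration tool becomes a mechanical step rather than a redesign.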
You’ll have to pick automation and orchestration tools that help you simplify the tasks, remove the complexity and make them consumable for your audience. Most vendors provide integrations for commonly used automation and orchestration systems – Ansible, Chef, Puppet, Cisco ACI and VMware vRealize, to name a few.
Before you embark on the automation journey, identify the drivers, tie it to business needs and spend some time planning the transition by identifying the use cases and tools in use. Script the processes manually and test before automating using tools of your choice.
Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.
If you are in the information security business like me, you have probably improved your frequent flyer status recently. Indeed, May-June are when most industry events occur. Like birds, we fly when spring arrives.
In this blog, I’ll share some thoughts based on conversations I had during my own journeys, including those at the global OWASP conference in Tel Aviv, Israel.
The audience was mostly split between developers and researchers, and then me, supposedly the only marketing guy within a mile radius. Since the event was held in Tel Aviv, an information security innovation hub, the vendor/customer ratio was higher than usual.
DevOps’ Least Favorite Word is “Security”
According to Radware’s C-Suite survey, 75% of organizations have turned information security into a marketing message. Meaning, executives understand that consumers are looking for secure products and services, and actively sell to that notion.
But do developers share the same insight, or accountability?
By nature, information security is the enemy of the agile world. In an age where software development has shifted from 80% code writing and 20% integration to 20% code writing and 80% integration, all DevOps has to do is assemble the right puzzle: scalable infrastructure, available open-source modules, and end-to-end automation and orchestration tools for provisioning, run-time management and even security testing.
In other words, there’s no need to start from scratch today. Being familiar with more tools and knowing how to efficiently navigate GitHub (and other open-source communities) can yield more success than coding skills. Moreover, it yields faster time-to-market, which seems to be in everybody’s interest.
Agility is the Name of the Game
As I mentioned, the global OWASP event attracted many vendors. However, will pitching ‘best of breed security’ do the trick? If you are the only one that can block rare attacks that only sophisticated hackers can carry out, is there a real business opportunity for your start-up to grow?
Well, DevOps says no!
And they are right. Running applications in the public cloud is all about efficiency and scale. Serverless and microservices architectures fragment monolithic applications into components that are created, run and vanish without any developer supervision or visibility. This is done via end-to-end automation, where the main orchestration tool is Kubernetes.
This is agility.
Building Secure Products and Services
Both efficiency and agility are legitimate business objectives. Why let security’s list of ‘what ifs’ interfere with them?
Ironically, success doesn’t depend on how well an application security solution detects and mitigates attacks. It correlates better with how well the solution integrates into the SDLC (software development lifecycle), which essentially means it can interoperate with these orchestration and automation tools.
Before building security features, vendors should think of hands-off implementation, auto-scale, zero to minimal day-to-day management and APIs to exchange data with other tools in the customer environment.
Once all that is in place, it’s time to proceed to security and start building the algorithmics of the detection engines and mitigation manners.
Keep in mind that security can no longer be static; it must be dynamic and evolving. Solutions must be able to learn and profile the behavior of traffic to the application, create policies automatically and adjust the rules over time as changes are introduced by the dev side. This is key for CI/CD, because the last thing developers want is to go back to the code to reassess and test its logic: every wrong decision translates to either a customer locked out (a false positive) or an attacker allowed in (a false negative).
Self-sufficient algorithmics reduces TCO significantly by reducing the required management labor – a plague in old application security solutions.
To auto-policy generation, DevOps says yes – allowing executives to market secure products and services.
Read “2019 C-Suite Perspectives: From Defense to Offense, Executives Turn Information Security into a Competitive Advantage” to learn more.
Modern applications are difficult to secure. Whether they are web or mobile, custom developed or SaaS-based, applications are now scattered across different platforms and frameworks. To accelerate service development and business operations, applications rely on third-party resources that they interact with via APIs, well-orchestrated by state-of-the-art automation and synchronization tools. As a result, the attack surface becomes greater as there are more blind spots – higher exposure to risk.
Applications, as well as APIs, must be protected against an expanding variety of attack methods and sources and must be able to make educated decisions in real time to mitigate automated attacks. Moreover, applications constantly change, and security policies must adapt just as fast. Otherwise, businesses face increased manual labor and operational costs, in addition to a weaker security posture.
The WAF Ten Commandments
The OWASP Top 10 list serves as an industry benchmark for the application security community, and provides a starting point for ensuring protection from the most common and virulent threats, application misconfigurations that can lead to vulnerabilities, and detection tactics and mitigations. It also defines the basic capabilities required from a Web Application Firewall in order to protect against common attacks targeting web applications like injections, cross-site scripting, CSRF, session hijacking, etc. There are numerous ways to exploit these vulnerabilities, and WAFs must be tested for security effectiveness.
However, vulnerability protection is just the basics. Advanced threats force application security solutions to do more.
Challenge 1: Bot Management
52% of internet traffic is bot-generated, half of which is attributed to “bad” bots. Unfortunately, 79% of organizations can’t make a clear distinction between good and bad bots. The impact is felt across all business arms as bad bots take over user accounts and payment information, scrape confidential data, hold up inventory and skew marketing metrics, leading to wrong decisions. Sophisticated bots mimic human behavior and easily bypass CAPTCHA and other challenges. Distributed bots render IP-based and even device-fingerprinting-based protection ineffective. Defenders must level up the game.
Challenge 2: Securing APIs
Machine-to-machine communications, integrated IoT devices, event-driven functions and many other use cases leverage APIs as the glue for agility. Many applications gather information and data from services with which they interact via APIs. Threats to API vulnerabilities include injections, protocol attacks, parameter manipulations, invalidated redirects and bot attacks. Businesses tend to grant access to sensitive data without inspecting or protecting their APIs against cyberattacks. Don’t be one of them.
Challenge 3: Denial of Service
Different forms of application-layer DoS attacks are still very effective at bringing application services down. This includes HTTP/S floods, low and slow attacks (Slowloris, LOIC, Torshammer), dynamic IP attacks, buffer overflow, Brute Force attacks and more. Driven by IoT botnets, application-layer attacks have become the preferred DDoS attack vector. Even the greatest application protection is worthless if the service itself can be knocked down.
Challenge 4: Continuous Security
For modern DevOps, agility is valued at the expense of security. Development and roll-out methodologies, such as continuous delivery, mean applications are continuously modified. It is extremely difficult to maintain a valid security policy to safeguard sensitive data in dynamic conditions without creating a high number of false positives. This task has gone way beyond humans, as the error rate and additional costs they impose are enormous. Organizations need machine-learning based solutions that map application resources, analyze possible threats, create and optimize security policies in real time.
Protecting All Applications
It’s critical that your solution protects applications on all platforms, against all attacks, through all the channels and at all times. Here’s how:
- Application security solutions must encompass web and mobile apps, as well as APIs.
- Bot Management solutions need to overcome the most sophisticated bot attacks.
- Mitigating DDoS attacks is an essential and integrated part of application security solutions.
- A future-proof solution must protect containerized applications, serverless functions, and integrate with automation, provisioning and orchestration tools.
- To keep up with continuous application delivery, security protections must adapt in real time.
- A fully managed service should be considered to remove complexity and minimize resources.
Read “Radware’s 2018 Web Application Security Report” to learn more.
We have all heard the horror tales: a negligent (or uninformed) developer inadvertently exposes AWS API keys online, only for hackers to find those keys, penetrate the account and cause massive damage.
But how common, in practice, are these breaches? Are they a legitimate threat, or just an urban legend for sleep-deprived IT staff? And what, if anything, can be done against such exposure?
The Problem of API Access Key Exposure
The problem of AWS API access key exposure refers to incidents in which developers’ API access keys to AWS accounts and cloud resources are inadvertently exposed and found by hackers.
AWS – and most other infrastructure-as-a-service (IaaS) providers – provides direct access to tools and services via Application Programming Interfaces (APIs). Developers leverage such APIs to write scripts that automatically configure cloud-based resources. This helps developers and DevOps save much time in configuring cloud-hosted resources and automating the roll-out of new features and services.
In order to make sure that only authorized developers are able to access those resources and execute commands on them, API access keys are used to authenticate access. Only code containing authorized credentials will be able to connect and execute.
This Exposure Happens All the Time
The problem, however, is that such access keys are sometimes left in scripts or configuration files uploaded to third-party resources, such as GitHub. Hackers are fully aware of this, and run automated scans on such repositories, in order to discover unsecured keys. Once they locate such keys, hackers gain direct access to the exposed cloud environment, which they use for data theft, account takeover, and resource exploitation.
A very common use case is for hackers to access an unsuspecting cloud account and spin-up multiple computing instances in order to run crypto-mining activities. The hackers then pocket the mined cryptocurrency, while leaving the owner of the cloud account to foot the bill for the usage of computing resources.
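The automated repository scans described above typically look for the well-known shape of AWS access key IDs, which begin with "AKIA" followed by 16 uppercase alphanumeric characters. A minimal sketch of such a scanner (the key below is AWS's documented example value, not a real credential):

```python
import re

# AWS access key IDs start with "AKIA" followed by 16 uppercase
# alphanumerics; automated scanners search file contents for this shape.
AWS_KEY_ID = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(text: str) -> list[str]:
    """Return candidate access key IDs found in a file's contents."""
    return AWS_KEY_ID.findall(text)

leaked = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_exposed_keys(leaked))  # ['AKIAIOSFODNN7EXAMPLE']
```

Running a check like this on your own repositories before pushing is a cheap way to catch the most obvious leaks; attackers run the same scan at scale, continuously.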
Examples, sadly, are abundant:
- A Tesla developer uploaded code to GitHub which contained plain-text AWS API keys. As a result, hackers were able to compromise Tesla’s AWS account and use Tesla’s resource for crypto-mining.
- WordPress developer Ryan Heller uploaded code to GitHub which accidentally contained a backup copy of the wp-config.php file, containing his AWS access keys. Within hours, this file was discovered by hackers, who spun up several hundred computing instances to mine cryptocurrency, resulting in $6,000 of AWS usage fees overnight.
- A student taking a Ruby on Rails course on Udemy opened up an AWS S3 storage bucket as part of the course, and uploaded his code to GitHub as part of the course requirements. However, his code contained his AWS access keys, leading to over $3,000 of AWS charges within a day.
- The founder of an internet startup uploaded code to GitHub containing API access keys. He realized his mistake within 5 minutes and removed those keys. However, that was enough time for automated bots to find his keys, access his account, spin up computing resources for crypto-mining and result in a $2,300 bill.
- An npm code package was published containing access keys to its maintainer’s S3 storage buckets.
And the list goes on and on…
The problem is so widespread that Amazon even has a dedicated support page to tell developers what to do if they inadvertently expose their access keys.
How You Can Protect Yourself
One of the main drivers of cloud migration is the agility and flexibility that it offers organizations to speed-up roll-out of new services and reduce time-to-market. However, this agility and flexibility frequently comes at a cost to security. In the name of expediency and consumer demand, developers and DevOps may sometimes not take the necessary precautions to secure their environments or access credentials.
Such exposure can happen in a multitude of ways, including accidental exposure of scripts (such as uploading to GitHub), misconfiguration of cloud resources which contain such keys, compromise of third-party partners who hold such credentials, exposure through client-side code which contains keys, targeted spear-phishing attacks against DevOps staff, and more.
Nonetheless, there are a number of key steps you can take to secure your cloud environment against such breaches:
Assume your credentials are exposed. There’s no way around this: Securing your credentials, as much as possible, is paramount. However, since credentials can leak in a number of ways, and from a multitude of sources, you should therefore assume your credentials are already exposed, or can become exposed in the future. Adopting this mindset will help you channel your efforts not (just) to limiting this exposure to begin with, but to how to limit the damage caused to your organization should this exposure occur.
Limit Permissions. As I pointed out earlier, one of the key benefits of migrating to the cloud is the agility and flexibility that cloud environments provide when it comes to deploying computing resources. However, this agility and flexibility frequently comes at a cost to security. One such example is granting promiscuous permissions to users who shouldn’t have them. In the name of expediency, administrators frequently grant blanket permissions to users, so as to remove any hindrance to operations.
The problem, however, is that most users never use most of the permissions they have granted, and probably don’t need them in the first place. This leads to a gaping security hole, since if any one of those users (or their access keys) should become compromised, attackers will be able to exploit those permissions to do significant damage. Therefore, limiting those permissions, according to the principle of least privileges, will greatly help to limit potential damage if (and when) such exposure occurs.
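As a concrete illustration of least privilege, a scoped policy grants only the actions and resources a workload actually needs. The sketch below builds a standard-format IAM policy document as a Python dict; the bucket name is hypothetical, and a real deployment would tailor the actions to the workload.

```python
import json

# A least-privilege policy: read-only access to one (hypothetical)
# S3 bucket, instead of blanket "*" permissions.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-bucket",
                "arn:aws:s3:::example-app-bucket/*",
            ],
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```

If the credentials attached to this policy leak, an attacker can read one bucket; they cannot spin up compute instances for crypto-mining, create users, or touch anything else in the account.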
Early Detection is Critical. The final step is to implement measures which actively monitor user activity for any potentially malicious behavior. Such malicious behavior can be first-time API usage, access from unusual locations, access at unusual times, suspicious communication patterns, exposure of private assets to the world, and more. Implementing detection measures which look for such malicious behavior indicators, correlate them, and alert against potentially malicious activity will help ensure that hackers are discovered promptly, before they can do any significant damage.
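One of the behavioral indicators mentioned above, first-time API usage, can be illustrated with a toy tracker. Real detection systems correlate many such signals across time and users; this sketch tracks exactly one and is illustrative only.

```python
from collections import defaultdict

# Per-user record of API actions already observed.
seen_actions: dict[str, set[str]] = defaultdict(set)

def is_first_time(user: str, action: str) -> bool:
    """Flag an API action this user has never performed before."""
    first = action not in seen_actions[user]
    seen_actions[user].add(action)
    return first

print(is_first_time("alice", "s3:GetObject"))    # True  (never seen)
print(is_first_time("alice", "s3:GetObject"))    # False (routine)
print(is_first_time("alice", "iam:CreateUser"))  # True  (worth an alert)
```

A sudden `iam:CreateUser` from an account that has only ever read objects is exactly the kind of anomaly that, correlated with unusual location or timing, should trigger an alert before real damage is done.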
Read “Radware’s 2018 Web Application Security Report” to learn more.
WHAT DO BANKS AND CYBERSECURITY HAVE IN COMMON? EVERYTHING
The world we live in can be a dangerous place, both physically and digitally. Our growing reliance on the Internet, technology and digitalization only makes our dependence on technology more perilous. As an executive, you’re facing pressure both internally (from customers and shareholders) and externally (from industry compliance or government regulations) to keep your organization’s digital assets and your customers’ data secure.
New cybersecurity threats require new solutions. New solutions require a project to implement them. The problems and solutions seem infinite while budgets remain bounded. Therefore, the challenge becomes how to identify the priority threats, select the solutions that deliver the best ROI and stretch dollars to maximize your organization’s protection. Consultants and industry analysts can help, but they too can be costly options that don’t always provide the correct advice.
So how best to simplify the decision-making process? Use an analogy. Consider that every cybersecurity solution has a counterpart in the physical world. To illustrate this point, consider the security measures at banks. They make a perfect analogy, because banks are just like applications or computing environments; both contain valuables that criminals are eager to steal.
The first line of defense at a bank is the front door, which is designed to allow people to enter and leave while providing a first layer of defense against thieves. Network firewalls fulfill the same role within the realm of cyber security. They allow specific types of traffic to enter an organization’s network but block mischievous visitors from entering. While firewalls are an effective first line of defense, they’re not impervious. Just like surreptitious robbers such as Billy the Kid or John Dillinger, SSL/TLS-based encrypted attacks or nefarious malware can sneak through this digital “front door” via a standard port.
Past the entrance there is often a security guard, the equivalent of an IPS or anti-malware device. This “security guard,” typically an anti-malware and/or heuristic-based IPS function, seeks to identify unusual behavior or other indicators that trouble has entered the bank, such as somebody wearing a ski mask or carrying a concealed weapon.
Once the hacker gets past these perimeter security measures, they find themselves at the presentation layer of the application, or in the case of a bank, the teller. There is security here as well: first, authentication (do you have an account?) and second, two-factor authentication (an ATM card and security PIN). IPS and anti-malware devices work in concert with SIEM management solutions to serve as security cameras, performing additional security checks. Just like a bank leveraging the FBI’s Most Wanted List, these solutions leverage crowdsourcing and big-data analytics to analyze data from a massive global community and identify bank-robbing malware in advance.
THE EXECUTIVE GUIDE TO DEMYSTIFYING CYBERSECURITY
A robber will often demand access to the bank’s vault. In the realm of IT, this is the database, where valuable information such as passwords, credit card or financial transaction information or healthcare data is stored. There are several ways of protecting this data, or at the very least, monitoring it. Encryption and database application monitoring solutions are the most common.
ADAPTING FOR THE FUTURE: DDOS MITIGATION
To understand how and why cybersecurity models will have to adapt to meet future threats, let’s outline three obstacles they’ll have to overcome in the near future: advanced DDoS mitigation, encrypted cyberattacks, and DevOps and agile software development.
A DDoS attack is any cyberattack that compromises a company’s website or network and impairs the organization’s ability to conduct business. Take an e-commerce business for example. If somebody wanted to prevent the organization from conducting business, it’s not necessary to hack the website but simply to make it difficult for visitors to access it.
Leveraging the bank analogy, this is why banks and financial institutions leverage multiple layers of security: it provides an integrated, redundant defense designed to meet a multitude of potential situations in the unlikely event a bank is robbed. This also includes the ability to quickly and effectively communicate with law enforcement.
In the world of cyber security, multi-layered defense is also essential. Why? Because preparing for “common” DDoS attacks is no longer enough. With the growing online availability of attack tools and services, the pool of possible attacks is larger than ever. This is why hybrid protection, which combines both on-premise and cloud-based mitigation services, is critical.
Why are there two systems when it comes to cyber security? Because it offers the best of both worlds. When a DDoS solution is deployed on-premise, organizations benefit from immediate and automatic attack detection and mitigation. Within a few seconds of the initiation of a cyber-assault, the online services are well protected and the attack is mitigated. However, an on-premise DDoS solution cannot handle volumetric network floods that saturate the Internet pipe. These attacks must be mitigated from the cloud.
Hybrid DDoS protection aspires to offer best-of-breed attack mitigation by combining on-premise and cloud mitigation into a single, integrated solution. The hybrid solution chooses the right mitigation location and technique based on attack characteristics. In the hybrid solution, attack detection and mitigation starts immediately and automatically using the on-premise attack mitigation device. This stops various attacks from diminishing the availability of the online services. All attacks are mitigated on-premise, unless they threaten to block the Internet pipe of the organization. In case of pipe saturation, the hybrid solution activates cloud mitigation and the traffic is diverted to the cloud, where it is scrubbed before being sent back to the enterprise. An ideal hybrid solution also shares essential information about the attack between on-premise mitigation devices and cloud devices to accelerate and enhance the mitigation of the attack once it reaches the cloud.
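The diversion decision described above reduces to a simple rule: mitigate on-premise by default, and divert to cloud scrubbing only when the attack volume threatens to saturate the Internet pipe. The sketch below is illustrative; the threshold ratio and names are assumptions, not any product's actual logic.

```python
def mitigation_location(attack_gbps: float,
                        pipe_capacity_gbps: float,
                        saturation_ratio: float = 0.9) -> str:
    """Pick where to mitigate based on attack volume vs. pipe capacity."""
    if attack_gbps >= pipe_capacity_gbps * saturation_ratio:
        return "cloud"       # divert and scrub upstream, before the pipe fills
    return "on-premise"      # handle locally for the fastest response

print(mitigation_location(2.0, 10.0))  # on-premise
print(mitigation_location(9.5, 10.0))  # cloud
```

The key design point is that the check runs continuously: an attack that starts small is mitigated locally within seconds, and diversion kicks in only if it grows toward pipe saturation.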
INSPECTING ENCRYPTED DATA
Companies have been encrypting data for well over 20 years. Today, over 50% of Internet traffic is encrypted. SSL/TLS encryption is still the most effective way to protect data as it ties the encryption to both the source and destination. This is a double-edged sword, however. Hackers are now leveraging encryption to create new, stealthy attack vectors for malware infection and data exfiltration. In essence, they’re a wolf in sheep’s clothing.
To stop hackers from leveraging SSL/TLS-based cyberattacks, organizations require computing resources to inspect communications and ensure they’re not carrying malware. These increasing resource requirements make it challenging for anything but purpose-built hardware to conduct inspection.
The equivalent in the banking world is twofold. If somebody were to enter wearing a ski mask, that person probably wouldn’t be allowed to conduct a transaction, or secondly, there can be additional security checks when somebody enters a bank and requests a large or unique withdrawal.
DEALING WITH DEVOPS AND AGILE SOFTWARE DEVELOPMENT
Lastly, how do we ensure that, as applications become more complex, they don’t become increasingly vulnerable either from coding errors or from newly deployed functionality associated with DevOps or agile development practices? The problem is most cybersecurity solutions focus on stopping existing threats. To use our bank analogy again, existing security solutions mean that (ideally), a career criminal can’t enter a bank, someone carrying a concealed weapon is stopped or somebody acting suspiciously is blocked from making a transaction. However, nothing stops somebody with no criminal background or conducting no suspicious activity from entering the bank. The bank’s security systems must be updated to look for other “indicators” that this person could represent a threat.
In the world of cybersecurity, the key is implementing a web application firewall that adapts to evolving threats and applications. A WAF accomplishes this by automatically detecting and protecting new web applications as they are added to the network via automatic policy generation.
It should also differentiate between false positives and false negatives. Why? Because just like a bank, web applications are being accessed both by desired legitimate users and undesired attackers (malignant users whose goal is to harm the application and/or steal data). One of the biggest challenges in protecting web applications is the ability to accurately differentiate between the two and identify and block security threats while not disturbing legitimate traffic.
ADAPTABILITY IS THE NAME OF THE GAME
The world we live in can be a dangerous place, both physically and digitally. Threats are constantly changing, forcing both financial institutions and organizations to adapt their security solutions and processes. When contemplating the next steps, consider the following:
- Use common sense and logic. The marketplace is saturated with offerings. Understand how a cybersecurity solution will fit into your existing infrastructure and the business value it will bring by keeping your organization up and running and your customer’s data secure.
- Understand the long-term TCO of any cyber security solution you purchase.
- The world is changing. Ensure that any cyber security solution you implement is designed to adapt to the constantly evolving threat landscape and your organization’s operational needs.
Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.
In part one of this blog series we discussed how there is oftentimes a lack of infrastructure knowledge and know-how in the relevant DevOps teams. This is not what was intended when “Agile” moved from being a pure development approach to a whole technology management methodology, but it is where we find ourselves. One of the consequences is that the traditional users of many technologies, the developers and application owners, know what functionality they need but not where to get it.
One of the biggest challenges we continue to see in the evolving cloud and DevOps world is around security and standards in general.
The general lack of accumulated infrastructure knowledge coupled with the enthusiasm with which DevOps teams like to experiment is causing significant challenges for corporations in terms of standardization. This is leading to two primary symptoms:
The success of an online business depends in large part on the user experience. After all, competitors are only a single click away. There is a broad spectrum of services that impact user experience from an infrastructure and application perspective. Think about page load times, availability, and feature richness. Agility in the delivery infrastructure and continuous delivery of applications have become essential to the success of an online business. Hyperscale cloud providers such as Google, Amazon, eBay, and Netflix have led the way on highly scalable, agile infrastructure and continuous delivery of applications, and are considered the gold standard for the practice of online business.