
Application Delivery | Security | SSL

Adopt TLS 1.3 – Kill Two Birds with One Stone

September 13, 2018 — by Prakash Sinha


Transport Layer Security (TLS) version 1.3 provides significant business benefits by making applications more secure, improving performance and reducing latency for the client. The redesigned client-server handshake decreases site latency, and the use of Elliptic Curve (EC) based ciphers allows faster page load times. TLS 1.3 also enforces forward secrecy, so that previously recorded traffic cannot be decrypted even if private session keys are later compromised.

Transport Layer Security – A Quick Recap

Transport Layer Security (TLS) version 1.0, introduced in 1999, was the first standardized version of SSL and is based on SSL v3.0. TLS 1.0 is obsolete and vulnerable to various security issues, such as downgrade attacks. The Payment Card Industry (PCI) set a deadline of June 30, 2018 for migrating to TLS 1.1 or higher.

TLS 1.1, introduced in 2006, is more secure than TLS 1.0 and protects against certain types of Cipher Block Chaining (CBC) attacks such as BEAST. Some TLS 1.1 implementations are vulnerable to POODLE, a form of downgrade attack. TLS 1.1 also removed vulnerable, broken ciphers such as DES and RC2, and introduced support for Forward Secrecy, although it is performance intensive.

TLS 1.2, introduced in 2008, added SHA-256 as a hash algorithm to replace SHA-1, which is considered insecure. It also added support for Advanced Encryption Standard (AES) cipher suites, Elliptic Curve Cryptography (ECC), and Perfect Forward Secrecy (PFS) without a significant performance hit. TLS 1.2 also removed the ability to downgrade to SSL v2.0, which is highly insecure and broken.

Why TLS 1.3?

TLS 1.3 is now an approved standard of the Internet Engineering Task Force (IETF). Sites utilizing TLS 1.3 can expect faster user connections than with earlier TLS versions, and more secure connections thanks to the elimination of obsolete and less secure ciphers, the server dictating the session security, and a faster handshake between client and server. TLS 1.3 eliminates the negotiation over which encryption to use: in the initial connection the server provides an encryption key, the client provides a session key, and the connection is established. However, if an endpoint does not support TLS 1.3, TLS 1.3 provides a secure means to fall back to TLS 1.2.
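To make the fallback concrete, here is a minimal client-side sketch using Python's built-in ssl module (Python 3.7+ with OpenSSL 1.1.1 or later assumed; the hostname is a placeholder). The client offers TLS 1.3 but still accepts TLS 1.2, and the version that is actually negotiated depends on what the server supports.

import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # allow fallback to TLS 1.2
context.maximum_version = ssl.TLSVersion.TLSv1_3   # prefer TLS 1.3 when available

host = "example.com"  # placeholder endpoint
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # version() reports the protocol actually negotiated during the handshake
        print(tls.version())   # "TLSv1.3" or "TLSv1.2"
        print(tls.cipher())    # negotiated cipher suite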

[You might also like: High-Performance Visibility into SSL/TLS Traffic]

TLS 1.3 – Recommendations

Organizations face serious strategic challenges in accelerating SSL/TLS while effectively addressing the growing number and complexity of encrypted web attacks. We recommend migrating to TLS 1.3 to take advantage of the significant business benefits and security that the newer standard provides. However, as with any transition to a new standard, be mindful of the adoption risks.

Evaluate the Risks and Plan Migration

The main risks are incompatibilities between client and server caused by poor implementations and bugs. You may also need to carefully evaluate the impact on devices that perform inspection based on static RSA keys, products that protect against data leaks, and products that implement out-of-path web application protection based on a copy of decrypted traffic.

  • Adopt a gradual deployment of TLS 1.3 – a crawl-walk-run approach of deploying in QA environments, test sites, and low-traffic sites first (a simple probe like the one sketched after this list can help track which endpoints already negotiate TLS 1.3)
  • Evaluate or query the “middle box” vendors for compatibility with TLS 1.3; currently, only active TLS 1.3 terminators can provide compatibility
  • Utilize Application Delivery Controllers (ADCs) to terminate TLS 1.3 in front of servers that are not yet capable of supporting it
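Such a probe can be as small as the following sketch, which attempts a handshake that permits only TLS 1.3 (the host names are hypothetical; Python 3.7+ with OpenSSL 1.1.1 or later assumed).

import socket
import ssl

HOSTS = ["qa.example.com", "www.example.com"]  # placeholder inventory

def supports_tls13(host, port=443):
    """Attempt a handshake that permits only TLS 1.3."""
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_3
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() == "TLSv1.3"
    except (OSError, ssl.SSLError):
        # connection refused, timeout, or the server cannot negotiate TLS 1.3
        return False

for host in HOSTS:
    print(host, "TLS 1.3 supported" if supports_tls13(host) else "TLS 1.3 not negotiated")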

TLS 1.3 provides improved security, forward secrecy that protects data even if private keys are compromised, lower latency and better performance.

Read “2017-2018 Global Application & Network Security Report” to learn more.

Download Now

Application Delivery | Application Security | Security

DDoS Protection is the Foundation for Application, Site and Data Availability

September 11, 2018 — by Daniel Lakier


When we think of DDoS protection, we often think about how to keep our website up and running. While searching for a security solution, you’ll find several options that are similar on the surface. The main difference is whether your organization requires a cloud, on-premise or hybrid solution that combines the best of both worlds. Finding a DDoS mitigation/protection solution seems simple, but there are several things to consider.

[You might also like: Should Business Risk Mitigation Be A Factor When We Choose Our Suppliers and Manufacturers?]

It’s important to remember that DDoS attacks don’t just cause a website to go down. While the majority do cause a service disruption, 90 percent of the time the result is not a completely unavailable website but a degradation in performance. As a result, organizations need to search for a DDoS solution that can both optimize application performance and protect from DDoS attacks. The two functions are natural bedfellows.

The other thing we often forget is that most traditional DDoS solutions, whether they are on-premise or in the cloud, cannot protect us from an upstream event or a downstream event.

  1. If your carrier is hit with a DDoS attack upstream, your link may be fine but your ability to do anything would be limited. You would not receive any traffic from that pipe.
  2. If your infrastructure provider goes down due to a DDoS attack on its key infrastructure, your organization’s website will go down regardless of how well your DDoS solution is working.

Many DDoS providers will tell you these are not part of a DDoS strategy. I beg to differ.

Finding the Right DDoS Solution

DDoS protection was born out of the need to improve availability and guarantee performance.  Today, this is critical. We have become an application-driven world where digital interactions dominate. A bad experience using an app is worse for customer satisfaction and loyalty than an outage.  Most companies are moving into shared infrastructure environments—otherwise known as the “cloud”— where the performance of the underlying infrastructure is no longer controlled by the end user.

Keeping the aforementioned points in mind, here are three key features to consider when looking at modern enterprise DDoS solutions:

  1. Data center or host infrastructure rerouting capabilities give organizations the ability to reroute traffic to secondary data centers or application servers if there is a performance problem caused by something that the traditional DDoS prevention solution cannot negate. This may or may not be caused by a traditional DDoS attack, but either way, it’s important to understand how to mitigate the risk from a denial of service caused by infrastructure failure (a minimal health-check-driven failover is sketched after this list).
  2. Simple-to-use link or host availability solutions offer a unified interface for conducting WAN failover in the event that the upstream provider is compromised. Companies can use BGP, but BGP is complex and rigid. The future needs to be simple and flexible.
  3. Infrastructure and application performance optimization is critical. If we can limit the amount of compute-per-application transactions, we can reduce the likelihood that a capacity problem with the underlying architecture can cause an outage. Instead of thinking about just avoiding performance degradation, what if we actually improve the performance SLA while also limiting risk? It’s similar to making the decision to invest your money as opposed to burying it in the ground.
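As an illustration of the rerouting idea in item 1, the sketch below runs a health check against a primary data center and, if it fails or responds too slowly, directs new connections to a secondary. The endpoints and threshold are hypothetical, and real ADC/GSLB products implement this decision far more robustly; the code only shows the logic.

import time
import urllib.request

PRIMARY = "https://dc1.example.com/health"     # placeholder endpoint
SECONDARY = "https://dc2.example.com/health"   # placeholder endpoint
MAX_LATENCY = 2.0                              # seconds, illustrative threshold

def healthy(url):
    """Return True if the endpoint answers 200 within the latency budget."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=MAX_LATENCY) as resp:
            ok = resp.status == 200
    except OSError:
        return False
    return ok and (time.monotonic() - start) <= MAX_LATENCY

active = PRIMARY if healthy(PRIMARY) else SECONDARY
print("routing new connections to:", active)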

[You might also like: Marrying the Business Need with the Technology Drive: Recapping It All]

Today you can buy separate products to meet these needs, but you are then left with an age-old problem: a disparate collection of poorly integrated best-of-breed solutions that don’t work well together.

These products should work together as part of a holistic solution where each solution can compensate and enhance the performance of the other and ultimately help improve and ensure application availability, performance and reliability. The goal should be to create a resilient architecture to prevent or limit the impact of DoS and DDoS attacks of any kind.

Read the “2018 C-Suite Perspectives: Trends in the Cyberattack Landscape, Security Threats and Business Impacts” to learn more.

Download Now

Application Delivery

Considerations for Load Balancers When Migrating Applications to the Cloud

July 31, 2018 — by Prakash Sinha


According to a new forecast from the International Data Corporation (IDC) Worldwide Quarterly Cloud IT Infrastructure Tracker, total spending on IT infrastructure for deployment in cloud environments is expected to total $46.5 billion in 2017 with year-over-year growth of 20.9%. Public cloud data centers will account for the majority of this spending, 65.3%, growing at the fastest annual rate of 26.2%. Off-premises private cloud environments will represent 13% of cloud IT infrastructure spending, growing at 12.7% year over year. On-premises private clouds will account for 62.6% of spending on private cloud IT infrastructure and will grow 11.5% year-over-year in 2017.

Application Delivery | Security

Should Business Risk Mitigation Be A Factor When We Choose Our Suppliers And Manufacturers?

July 24, 2018 — by Daniel Lakier


This is something that I have struggled with for most of my working life. As a technology professional, it is my job to pick the best products and solutions, or to dig deeper and marry that technological decision with one that’s best for my organization. Is it incumbent on me to consider my suppliers’ financials, their country of origin, or perhaps their business practices?

This thought was thrust sharply into focus during the past few months. First, we were reminded that a sound business still needs sound financials. Second, we were warned about the ramifications of a trade war.

Application Delivery

Single Sign On (SSO) Use Cases

May 24, 2018 — by Prakash Sinha


SSO reduces password fatigue for users who otherwise have to remember a separate password for each application. With SSO, a user logs into one application and is then able to sign into other applications automatically, regardless of the domain the user is in or the technology in use. SSO makes use of a federation service or login page that orchestrates the user’s credentials between multiple applications.
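The core idea can be illustrated with a deliberately simplified sketch: an identity provider issues a signed token once, and each participating application trusts that token instead of prompting for another password. The names and shared secret below are purely illustrative; real deployments rely on standards such as SAML or OpenID Connect rather than hand-rolled tokens.

import base64
import hashlib
import hmac
import json

IDP_SECRET = b"shared-only-with-trusted-apps"   # placeholder secret

def issue_token(user):
    """Identity provider: called once, after the user logs in."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": user}).encode())
    sig = hmac.new(IDP_SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token):
    """Any participating application: accepts the token without re-prompting."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(IDP_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(sig, expected):
        return json.loads(base64.urlsafe_b64decode(payload))["sub"]
    return None

token = issue_token("alice")   # login happens once
print(verify_token(token))     # application A sees "alice"
print(verify_token(token))     # application B sees "alice", no second login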

Application Delivery

Maintaining Your Data Center’s Agility and Making the Most Out of Your Investment in ADC Capacity

April 25, 2018 — by Fabio Palozza


Deciding on an appropriate application delivery controller (ADC) and evaluating the need for supporting infrastructure is a complex and challenging job, largely because ADCs are increasingly used across diverse environments and as virtual, cloud, and physical appliances.

NFV | SDN | Security

5G Security Challenges and Ways to Overcome Them

April 19, 2018 — by Fabio Palozza


As the world waits for the introduction of 5G networks, the industry gears up to address the security challenges that may accompany them. 5G networks will promote the use of a huge number of interconnected devices, a tremendous increase in bandwidth, and the collaborative functioning of legacy and new access technologies. Undoubtedly, the upcoming 5G environment will demand the deployment of additional security mechanisms to ensure business continuity. 5G systems are meant to be service-oriented, which is why it is important to address the security challenges appropriately and to focus on instilling stronger security and privacy settings in 5G networks.

Application Delivery

An Overview of the TCP Optimization Process

April 10, 2018 — by Fabio Palozza


The Transmission Control Protocol (TCP) drives major internet operations such as video streaming, file transfers, web browsing, and communications, accounting for a very high percentage of fixed-access internet traffic and an even higher share of mobile internet traffic. Surprisingly, TCP performance has yet to reach its full potential. Sub-optimal TCP performance translates into undesirable consequences for communications service providers, who struggle to make the most of expensive resources, to combat operational inefficiencies, and to provide a high quality of experience to subscribers.