If you’re attending this week’s AWS re:Invent, check out our session on Alteon VA for AWS at AWS Marketplace – Booth 228 at 2:15 pm on Thursday, November 13, 2014. At the session, we’ll discuss and demonstrate Alteon VA for AWS.
Jim Frey is Vice President of Research, Network Management for Enterprise Management Associates (EMA) and is a featured guest blogger.
The steady moves toward internal/external cloud computing, virtualization, more complex web applications, BYOD, the app economy and new strategies for dealing with cyber attacks are bringing disruptive change to IT. These changes are mostly for the good, but along the way they have created a litany of new pain points and challenges.
Last week I had the pleasure of co-hosting an InformationWeek webinar with Jim Metzler, a distinguished Research Fellow and also the Co-Founder at Ashton Metzler & Associates – whom I’ve known for years from various industry events and conferences. The webinar had over 400 participants and discussed why ensuring the SLA of websites and internal business-critical applications is extremely important to business functions and IT organizations. We also spoke about how Radware solutions can be utilized to deliver, monitor and manage application SLA, as well as drive web performance optimization, even during a cyber attack.
Enterprises are deploying Application Delivery Controllers (ADCs), effectively next-generation load balancers, to front-end their mission-critical applications. The enterprise ADC market is mature, with well-established players and solutions. Yet when moving applications to the cloud, it's a completely different playground. The business need is to support a new application life cycle—one that allows the business to scale across hybrid cloud environments.
In this post I will explore an application life cycle use case across hybrid cloud, and how to properly deploy an ADC in the cloud to support the application life cycle.
A couple of weeks ago I returned from a business trip to Korea and China, where I met with a number of customers and partners from various vertical markets. The most interesting thing about this trip was that these two countries continue to see huge increases in their Internet traffic due to steady, ongoing online business expansion. The customers I met with included an online payment services company, various leading mobile carriers, a large bank and a cable network operator, among others. What I found common to these companies is that almost all of them are considering their next-generation strategy with regards to their data center technologies and operations. And while each customer obviously has a different environment with distinct applications and infrastructures, it was clear that they are all keen on delivering one thing – an optimized Web experience.
As the industry and the media keep feeding the Software-Defined Networking (SDN) hype and vendors introduce SDN products into the market – it is becoming increasingly important to understand the difference between various offerings as well as the ways in which they can help end users.
The majority of the discussion has centered on changes in forwarding functionality – the task of forwarding packets between the interconnection ports of networking devices. With OpenFlow (which is not SDN itself, but is what triggered much of the SDN discussion), the intelligence that makes forwarding decisions – the control plane – has moved out of the forwarding platforms that connect external systems to the network and into centralized controllers. The result is decoupled OpenFlow switches and controllers.
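To make the decoupling concrete, here is a minimal, purely illustrative Python sketch (not a real OpenFlow implementation; the class and method names are invented for this example). The controller plays the role of the centralized control plane, computing forwarding decisions and pushing flow entries down; the switches keep only a flow table and forward by table lookup, making no decisions of their own.

```python
class Switch:
    """Data plane only: forwards by matching its flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # destination -> output port, installed by the controller

    def install_flow(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # Pure table lookup; an unmatched packet would be punted to the controller.
        return self.flow_table.get(dst)


class Controller:
    """Centralized control plane: computes paths and programs the switches."""
    def __init__(self, switches):
        self.switches = switches

    def program_path(self, dst, hops):
        # hops: list of (switch_name, out_port) pairs along the chosen path
        for name, port in hops:
            self.switches[name].install_flow(dst, port)


switches = {"s1": Switch("s1"), "s2": Switch("s2")}
controller = Controller(switches)

# The controller decides the path; the switches merely execute it.
controller.program_path("10.0.0.5", [("s1", 2), ("s2", 1)])
print(switches["s1"].forward("10.0.0.5"))  # port 2, as installed above
```

The point of the sketch is the separation of concerns: all path intelligence lives in `Controller`, while `Switch.forward` is a dumb lookup, which is the essence of the OpenFlow split between control and forwarding planes.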
The devastation wrought by superstorm Sandy is a stark reminder of just how fragile our environment is versus the power of nature. In just a couple of days, a single storm disrupted the lives of so many people and paralyzed a large number of businesses even days after it passed. In the wake of the storm, one of the questions on my mind is, how can we help businesses remain functional after such a massive hit regardless of their size?
Working for an Application Delivery Controller company, I'm no stranger to disaster recovery. Most often, disaster recovery is initiated by large enterprises that invest big money to build entire backup datacenters with the ability to automatically provide all online services in case their main datacenters become unavailable. This is a reality faced by many businesses in and around New York after the storm.
About a month ago, I wrote a post on cloud load balancing versus application delivery controllers. In that post, I explored the core differences between cloud-managed load balancing and self-managed commercial load balancing, using an application delivery controller virtual appliance running over cloud infrastructure. In part two of this series, I take a closer look at some of the themes laid out in my earlier post with an emphasis on the role application delivery controllers play in addressing the challenges associated with migrating legacy applications to a general purpose cloud infrastructure.
Over the past two years, Radware has offered virtualized ADC appliances with virtual ADC instances that can run on hardware appliances or on general-purpose servers. During this time, we've noticed two schools of thought emerging on the all-important question of vADC density. The first argues that vADC density is one of the key criteria to consider when evaluating a virtualized appliance. The second, however, claims that organizations will typically avoid densities higher than 10 to 16 vADCs on a single piece of hardware. In search of greater clarity, after implementing hundreds of ADC consolidation and virtualization projects with thousands of vADCs, we went back to see whether there were any identifiable trends in vADC density deployment.