In part one of this blog series, we discussed the frequent lack of infrastructure knowledge and know-how within DevOps teams. This is not what was intended when “Agile” grew from a pure development approach into a broader technology management methodology, but it is where we find ourselves. One consequence is that the traditional users of many technologies, the developers and application owners, know what functionality they need but not where to get it.
Today more than ever, the success or failure of our digital enterprise rests on whether our customer has a good user experience. No one wants to use something that is difficult or unreliable, and most of us don’t want to use something unless the experience is consistent. All too often, organizations pour all their energy into making their tool/application look good, making it easy to use, or giving it great functionality. What they forget is that performance, especially consistent performance, can be just as important. All these things rolled into one are what I call the convenience factors. It’s not a new concept, and many brick-and-mortar companies have failed over the years because of it. If we go back a few years, we can see many examples of technologies or companies declining from an original position of strength because they never took this perceived convenience/quality factor into account. Three examples:
One of the biggest challenges we continue to see in the evolving cloud and DevOps world is around security and standards in general.
The general lack of accumulated infrastructure knowledge, coupled with DevOps teams’ enthusiasm for experimentation, is causing significant standardization challenges for corporations. This is leading to two primary symptoms:
Several years ago, the monolithic approach to application development fell out of vogue because time to market became the key success metric in our ever-changing world. Agile development started to become the norm, and the move to DevOps was born. At the same time as this change was taking place, there was another groundbreaking development: the advent of public clouds. Either change by itself was industry-impacting, but the two happening at the same time, each enabling the other, changed everything.
It’s funny: even when the first way we do something is the right way, we try to improve it to make it look shinier. Eventually we realize that the most obvious answer was the right one all along, our original approach.
How do we build a truly resilient security framework that incorporates microsegmentation directly into our SCADA systems and network to protect them, when we can’t add security controls for fear of the business consequences?
I think the solution is quite obvious on the surface: change the dynamic that has existed within our communication-centric IT world since the inception of ARPANET. What do I mean?
The world is changing; it always has, but it is changing faster now than ever before. This general change is translating into even bigger changes in the cyber world. Some of the key areas that are evolving aren’t new, like availability or security. Others, like automation, are maturing quickly, and then there is the ever-present need for “easy.” Easy is a nebulous term, but in this case it refers to ease of procurement, ease of setup, flexibility in platform, and ease of ongoing management.
This accelerated change is being driven by different market and business drivers. Some of the key market drivers are compliance, time to market, cyber loss risk, and increased competition around the user experience. This change is acutely felt in the ADC (application delivery controller) space.
In the year 1453, the Ottoman Empire under Sultan Mehmed II was able to accomplish what none before them had ever been able to achieve. For more than a millennium, Byzantium had remained a bastion of the Orthodox faith, the great kingdom of the East. The hordes and barbarians that had caused the downfall of so many other empires had been unable to conquer this unconquerable city. Until one day when it all changed.
I remember when I first learned about web application firewall technology. It seemed like magic to me: a device that could compensate for bad coding or unexpected/unintended web application functionality. It could do this by learning expected application behavior and then enforcing that behavior, even if the application itself was capable of allowing the unwanted behavior. The business case for such a technology is easy to recognize, even more so today than in the early-to-mid 2000s when it first came out: the ability to have a device compensate for human error.
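To make the learn-then-enforce idea concrete, here is a minimal sketch of a positive security model in Python. Everything here (the `AllowlistWAF` class, its `learn`/`enforce` methods, and the toy profiling rules) is illustrative and assumed for this example; it is not any vendor's API, just the core pattern of profiling expected input and blocking deviations.

```python
import re

class AllowlistWAF:
    """Toy positive-security-model WAF: learn expected parameter
    patterns per endpoint, then block anything that deviates."""

    def __init__(self):
        # endpoint -> {parameter name: compiled allowlist pattern}
        self.learned = {}

    def learn(self, endpoint, params):
        """Record which parameters an endpoint legitimately receives,
        deriving a simple pattern from the observed values."""
        profile = self.learned.setdefault(endpoint, {})
        for name, value in params.items():
            if value.isdigit():
                # Numeric values must stay numeric.
                profile[name] = re.compile(r"^\d{1,10}$")
            else:
                # Free text is restricted to safe characters and capped length.
                profile[name] = re.compile(r"^[\w @.\-]{1,64}$")

    def enforce(self, endpoint, params):
        """Allow only requests that match the learned profile."""
        profile = self.learned.get(endpoint)
        if profile is None:
            return False  # unknown endpoint: block by default
        for name, value in params.items():
            pattern = profile.get(name)
            if pattern is None or not pattern.match(value):
                return False  # unlearned parameter or out-of-profile value
        return True

waf = AllowlistWAF()
waf.learn("/login", {"user": "alice", "pin": "1234"})
print(waf.enforce("/login", {"user": "bob", "pin": "9999"}))        # allowed
print(waf.enforce("/login", {"user": "x' OR 1=1 --", "pin": "1"}))  # blocked
```

The key point the sketch illustrates is that enforcement does not need to know about specific attacks: the SQL-injection-style payload above is rejected simply because it falls outside the behavior the device learned, which is exactly how the technology compensates for application flaws it has never seen.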
In this blog post we will cover the basics of building a truly resilient network where throughput isn’t always important, but reliability and redundancy are. We will look at this from the operator’s standpoint.