Throughout the past four posts in this series we have returned to some of the same key points. Today, we will recap.
In the past 20 years I have often found myself looking for just the right technology tool to solve a specific problem. Most often I have been able to find something that fits the bill, but never the sought-after, best-of-breed solution that promises to solve all the world’s problems. Sometimes I wonder if it would be possible to find the right combination of tools to create world peace, end world hunger or stop global warming. Obviously I’m being facetious, but this approach of finding specific tools to solve symptomatic problems remains the same as it was twenty years ago. By the time you assemble all the disparate tools to create Utopia, you realize you have actually built Frankenstein’s monster. Too many tools, like too many cooks in the kitchen, can make success untenable: too much to set up, too much complexity, no global view and, oftentimes, fractured ownership. Not to mention that this approach often makes it difficult to address the root cause of the problem.
In part one of this blog series we discussed how the relevant DevOps teams often lack infrastructure knowledge and know-how. This is not what was intended when “Agile” moved from being a pure development approach to a whole technology-management methodology, but it is where we find ourselves. One consequence is that the traditional users of many technologies, the developers and application owners, know what functionality they need but not where to get it.
Today more than ever, the success or failure of our digital enterprise rests on whether our customers have a good user experience. No one wants to use something that is difficult or unreliable, and most of us won’t use something unless the experience is consistent. All too often, organizations pour all their energy into making their tool/application look good, making it easy to use or giving it great functionality. What they forget is that performance, especially consistent performance, can be just as important. All these things rolled into one are what I call the convenience factors. This is not a new concept, and many brick-and-mortar companies have failed over the years because of it. If we go back a few years, we can see many examples of technologies or companies that succumbed from an original position of strength because they never took this perceived convenience/quality factor into account. Three examples:
One of the biggest challenges we continue to see in the evolving cloud and DevOps world is around security and standards in general.
The general lack of accumulated infrastructure knowledge, coupled with the enthusiasm with which DevOps teams like to experiment, is causing significant standardization challenges for corporations. This is leading to two primary symptoms:
Several years ago, the monolithic approach to application development fell out of vogue because time to market became the key success metric in our ever-changing world. Agile development became the norm, and the move to DevOps was born. As this change was taking place, another groundbreaking development arrived: the advent of public clouds. Either change by itself was industry-impacting, but the two happening at the same time, each enabling the other, changed everything.
It’s funny: although the first way we do something is sometimes the right way, we try to improve it to make it look shinier. Eventually we realize that the most obvious answer was actually the right answer all along: our original tactic.
How do we build a truly resilient security framework that incorporates micro-segmentation directly into our SCADA systems and our network to protect them, when we can’t add security controls for fear of the business consequences?
I think the solution is quite obvious on the surface: change the dynamic that has existed within our communication-centric IT world since the inception of ARPANET. What do I mean?
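One way to picture that change of dynamic is the move from the default-allow posture our networks inherited from the ARPANET era to a default-deny, micro-segmented one, where only explicitly permitted flows pass. The sketch below is illustrative only and not from the original posts; the zone names, the allow-list and the Modbus/TCP port are hypothetical examples.

```python
# Illustrative sketch of a default-deny (micro-segmentation) policy check.
# All names and rules are hypothetical; port 502 is the well-known
# Modbus/TCP port often used between SCADA components.

ALLOWED_FLOWS = {
    ("hmi", "plc", 502),        # operator HMI may poll the PLC
    ("historian", "plc", 502),  # data historian may poll the PLC
}

def is_allowed(src: str, dst: str, port: int) -> bool:
    """Default-deny: a flow passes only if it is explicitly allow-listed."""
    return (src, dst, port) in ALLOWED_FLOWS

# A flow that is not on the list -- say, an office workstation reaching
# the PLC -- is simply dropped, instead of being allowed by default.
```

The point of the sketch is the inversion: in the old dynamic, the question was “what must we block?”; in a micro-segmented one, it becomes “what must we allow?”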
The world is changing; it always has, but it is changing faster now than ever before. This general change translates into even bigger changes in the cyber world. Some of the key areas that are evolving, like availability and security, aren’t new. Others, like automation, are maturing quickly, and then there is the ever-present need for “easy.” Easy is a nebulous term, but in this case it refers to ease of procurement, ease of setup, flexibility of platform and ease of ongoing management.
This accelerated change is being driven by different market and business drivers. Some of the key market drivers are compliance, time to market, cyber-loss risk, and increased competition around the user experience. This change is felt acutely in the ADC (application delivery controller) space.
In the year 1453, the Ottoman Empire under Sultan Mehmed II accomplished what none before them had been able to achieve. For more than a millennium, Byzantium had remained a bastion of the Orthodox faith, the great empire of the East. The hordes and barbarians that had caused the downfall of so many other empires had been unable to conquer this unconquerable city. Until one day, it all changed.