Several years ago, the monolithic approach to application development fell out of vogue because time to market became the key success metric in our ever-changing world. Agile development started to become the norm and the move to DevOps was born. At the same time as this change was taking place, there was another groundbreaking development: the advent of public clouds. Either change by itself was industry-impacting, but the two happening at the same time, each enabling the other, changed everything.
Today we find ourselves in a world where the most important paradigm in IT is often speed and agility. To this end, DevOps was born from agile development. It was meant to increase agility by incorporating operations/infrastructure know-how into small, nimble development teams, hence the 'Ops' in the term DevOps. In reality, what it often meant was the elimination of the traditional infrastructure people (and their knowledge) from the production/environment lifecycle, a change further enabled by the advent of the public cloud. Interestingly, this seemingly harmless change (all in the pursuit of agility) has had some unintended consequences, primarily due to how this "DevOps" paradigm has usually been rolled out. The lack of practical knowledge of existing infrastructure solutions and products has actually led to a loss of agility. It is oftentimes a real-world case of less haste, more speed.
In the following 5-part blog series we will review some practical steps we can take, tools we can use and functionality we should be insisting on in order to best realize our corporate goals: reduced time to market, improved operational efficiency, reduced complexity and cost savings where practical. We will address these challenges from the developer/DevOps point of view first. We will then address the same challenges from the traditional IT infrastructure leaders' point of view as well. Too often these two groups find their goals in opposition to one another, but that doesn't have to be the case. Things like agility, flexibility and time to market can exist in conjunction with things like standards, centralized control and auditing.
What we’ll cover:
Part 1: Choosing the right cloud. Often we default to the big two, but maybe we shouldn’t.
Part 2: Security by proxy: simplify or complicate?
Part 3: Re-aggregating tools back to the bigger, better solutions we used to have. This will simplify cloud deployments, limit product sprawl and reduce budgets.
Part 4: Use tools to save some time (they do exist).
Part 5: Recapping it all.
Today we will cover the most basic decision, one where we often default to the big two: which cloud. As they say at art school, choose your canvas carefully. Private data centers and pretty much every public cloud offering all have different advantages, and those advantages can even apply to the same project at different stages of its life cycle. Many of my friends, colleagues and customers have started using smaller, specialized public clouds for testing and 'dev' because they offer a strong ecosystem that includes monitoring and testing tools that simplify the operational side of development. Most of those same friends go to production in one of the larger public clouds because of the frictionless procurement and the ease with which scale can be achieved. Below is a list of some of the key questions to consider when choosing the cloud that's right for you.
- How big will your environment be to start (in some cases you can start off for free)?
- How big will this environment grow? Consider the cost-increase ratios as you scale.
- What amount of technical support do you need (smaller clouds tend to offer more customized support services)?
- Is a vertical cloud worth considering?
- Does the cloud offer the right architecture for you (SaaS vs. PaaS vs. IaaS, etc.)?
- Are your DevOps teams used to a marketplace type of environment?
- Can you provide your own frictionless product procurement environment? What tools are freely available?
- Does the cloud provide redundancy and backup (and what level)?
- Does the cloud provide value-added services?
- Is the cloud security-certified (SAS 70, SSAE 16 or SOC 2)?
- Do they offer cyber insurance? Should you use the cloud's built-in functionality/branded products? If you do, will that make you less nimble by locking you into a proprietary environment?
- How do you get your data out of the cloud if you need to move?
- If you are in Europe and your data needs to remain geographically bound, can they accommodate that?
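One lightweight way to work through the checklist above is a weighted scorecard. The sketch below is purely illustrative: the criteria mirror the questions in the list, but the weights and the per-provider scores are hypothetical assumptions you would replace with your own.

```python
# Hypothetical weighted scorecard for comparing cloud providers.
# Criteria mirror the checklist above; weights (importance, 1-5) and
# per-provider scores (1-5) are illustrative assumptions, not real data.

CRITERIA_WEIGHTS = {
    "starting_cost": 3,
    "scaling_cost_ratio": 4,   # cost-increase ratios as you grow
    "support_quality": 3,      # smaller clouds often customize support
    "architecture_fit": 5,     # SaaS vs. PaaS vs. IaaS fit
    "security_certs": 5,       # e.g. SSAE 16 / SOC 2
    "data_portability": 4,     # how easily you can leave
    "geo_residency": 2,        # e.g. EU data-residency requirements
}

def score_provider(scores: dict) -> float:
    """Weighted average of 1-5 criterion scores, normalized to 0-100."""
    total_weight = sum(CRITERIA_WEIGHTS.values())
    weighted = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
    return round(100 * weighted / (5 * total_weight), 1)

# Illustrative inputs: a large public cloud vs. a smaller niche cloud.
big_cloud = {"starting_cost": 3, "scaling_cost_ratio": 2, "support_quality": 2,
             "architecture_fit": 5, "security_certs": 5, "data_portability": 3,
             "geo_residency": 4}
niche_cloud = {"starting_cost": 5, "scaling_cost_ratio": 4, "support_quality": 5,
               "architecture_fit": 3, "security_certs": 4, "data_portability": 4,
               "geo_residency": 3}

print("big cloud:", score_provider(big_cloud))      # big cloud: 71.5
print("niche cloud:", score_provider(niche_cloud))  # niche cloud: 79.2
```

The point is not the specific numbers; it is that writing the weights down forces the team to agree on what actually matters before the sales decks arrive.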
Please use your logic chip when choosing the cloud; remember, people can say anything. I was once working in the security architecture group at a Fortune 3 company when a very large cloud provider proceeded to tell us that our data would be more secure in their cloud. They gave us several reasons.
First, why we would find it hard to secure our data ourselves (mostly true):
- Our core business was not technology
- Our industry made us a likely target for attack
- We were using several partners to help secure some parts of a very large, geographically-dispersed environment. “Look how much money you will save!”
Second, why we should choose them (the points are partially true, but the conclusion is hogwash):
1) This cloud provider had built inherent security into their cloud (translation: they considered ACLs a firewall, didn't need DDoS protection because they had unlimited bandwidth, had some logs they couldn't share, etc.). You could, and still can, buy many third-party tools, but due to the underlying cloud architecture you would have to buy far more of them in order to claim the same level of security as in a private data center. The reason? Think of something like VPC-size limitations: you can't have one firewall for the whole cloud environment, you must have one per VPC.
2) As a Fortune 3 company we were a target, but multiple three-letter agencies and our largest competitors already used them (all true, but if I'm a target and you host all of them and maybe me, who's the target?).
3) You can get rid of so many tools and skill sets; think of the cost savings (definitely not true; for anything other than a small company, cost savings do not exist in the cloud). Senior IT leadership may think they do, but the mid-tier DevOps leaders know they don't. That's not why you are going to the cloud; you are going for frictionless growth/scale, agility and the ability to convert capital costs into operational expenditures. The whole premise of cloud is unlimited, instantaneous resources, the exact opposite of your private data center, which was about maximizing limited resources.
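The tooling multiplier in point 1 and the vanishing savings in point 3 can be sketched with back-of-the-envelope arithmetic. The VPC count, HA-pair sizing and license price below are all hypothetical figures chosen only to show the shape of the math.

```python
# Back-of-the-envelope sketch of why per-VPC architecture multiplies
# security tooling (point 1) and erodes the promised savings (point 3).
# VPC counts and license prices are illustrative assumptions.

def firewall_instances(vpcs: int, per_vpc: int = 2) -> int:
    """Instances needed when each VPC requires its own HA firewall pair."""
    return vpcs * per_vpc

def annual_license_cost(instances: int, price_per_instance: float) -> float:
    """Total yearly spend at a flat per-instance license price."""
    return instances * price_per_instance

# Private data center: one HA firewall pair can front the whole environment.
dc = firewall_instances(vpcs=1)
# Public cloud: a firewall cannot span VPCs, so the count scales with VPCs.
cloud = firewall_instances(vpcs=40)

print(dc, cloud)                             # 2 80
print(annual_license_cost(cloud, 15_000.0))  # 1200000.0
```

Swap in your own VPC count and pricing; the multiplier, not the absolute figure, is the argument.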
Once you have chosen your canvas (taking the above questions into account, while remembering that salespeople usually tell the truth but often bear false witness), it's time to focus on whether it is possible to keep some standards and secure your data in the cloud, where you don't always own the underlying architecture and/or may be constrained by its architectural constructs. We will cover more about that in the next exciting installment.
Read “Keep It Simple; Make It Scalable: 6 Characteristics of the Futureproof Load Balancer” to learn more.
Daniel Lakier is VP-ADC globally for Radware. Daniel has been in the greater technology industry for over 20 years. During that time he has worked in multiple verticals including the energy, manufacturing and healthcare sectors. Daniel enjoys new challenges and as such has held several different roles in his career, from hands-on engineering to architecture and sales. At heart Daniel is a teacher and a student. He is forever learning and truly has a passion for sharing his knowledge. Most recently Daniel left his role as President and CTO of a leading technology integrator, where he had spent the better part of 8 years, to join the Radware organization. When Daniel isn't at the office he enjoys working on the farm and chasing his wonderful daughters.