Automation has been a frequent topic of conversation with many of our customers lately, especially as cloud migration gets underway and DevOps processes with continuous delivery become prevalent.
Management and monitoring in the Software-Defined Data Center (SDDC) benefit from automation principles: programmability and API- and policy-driven provisioning of application environments through self-service templates. These best practices help application owners define, manage, and monitor their own environments, while benefiting from the performance, security, business continuity, and monitoring infrastructure provided by IT teams. SDDC also changes the way IT designs and thinks about infrastructure: the goal is to adapt to the continuous delivery demands of application owners in a “cloudy” world.
Last week I met with a very large financial enterprise that has adopted on-demand provisioning. Having virtualized most of its infrastructure, the company spins up applications on demand and has developed tools that automate the provisioning of applications and servers for customers and internal application developers through self-service applications.
Many organizations, such as Netflix and Amazon, use a microservice architecture to implement business applications as collections of loosely coupled services. Among the reasons to move to this distributed, loosely coupled architecture are hyperscale and continuous delivery for complex applications. Teams in these organizations have adopted Agile and DevOps practices to deliver applications quickly and to deploy them with a lower failure rate than traditional approaches. However, you have to balance the complexity that comes with a distributed architecture against your application needs, scale requirements, and time-to-market constraints.
If you have signed into Gmail and noticed that you could also access Google portfolio apps such as Google Maps, YouTube, Google Play, Google Photos, and other Google applications, you are already using single sign-on (SSO)! The user logs in once to a Google account and has access to the other Google applications.
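To make the idea concrete, here is a minimal sketch of token-based SSO in Python. The identity provider, the app names, and the shared HMAC signing secret are all hypothetical, chosen for illustration; production SSO systems such as SAML or OpenID Connect use signed assertions from a dedicated identity provider with asymmetric keys rather than a shared secret.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical secret shared by the identity provider and the apps.
# Real SSO deployments use asymmetric signatures instead.
SECRET = b"demo-signing-key"

def issue_token(user):
    """Identity provider: sign a token once, when the user logs in."""
    payload = base64.urlsafe_b64encode(
        json.dumps({"sub": user, "iat": int(time.time())}).encode())
    sig = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload, hashlib.sha256).digest())
    return (payload + b"." + sig).decode()

def verify_token(token):
    """Any participating app: check the signature; no second login needed."""
    payload_b64, _, sig_b64 = token.encode().rpartition(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, payload_b64, hashlib.sha256).digest())
    if hmac.compare_digest(sig_b64, expected):
        return json.loads(base64.urlsafe_b64decode(payload_b64))
    return None  # tampered or unknown token

# One login at the provider; every app trusts the same token.
token = issue_token("alice")
for app in ("maps", "photos", "play"):
    assert verify_token(token)["sub"] == "alice"
```

The key design point is that each application verifies the token locally instead of prompting for credentials again, which is what makes the single login feel seamless across the suite.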
Many of us are familiar with Hypertext Transfer Protocol Secure (HTTPS), which uses a cryptographic protocol commonly referred to as Transport Layer Security (TLS) to secure our communication on the Internet. In simple terms, there are two keys: one available to everyone via a certificate, called the public key, and the other available only to the recipient of the communication, called the private key. When you want to send encrypted communication to someone, you use the receiver’s public key to secure it. Once secured, the communication can be decrypted only by the recipient, who holds the private key.
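The public/private key idea can be sketched with the textbook RSA example below. This is a toy illustration with tiny primes, not how TLS works in practice; in a real TLS handshake, public-key cryptography is used mainly to authenticate the server and agree on symmetric session keys, and real keys are 2048 bits or more.

```python
# Toy RSA demonstration of public-key encryption.
# The tiny primes are for illustration only.

p, q = 61, 53                  # two secret primes
n = p * q                      # modulus 3233, shared by both keys
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent  -> public key (e, n)
d = pow(e, -1, phi)            # private exponent -> private key (d, n)

message = 65                       # a message encoded as a number < n
ciphertext = pow(message, e, n)    # sender encrypts with the PUBLIC key
decrypted = pow(ciphertext, d, n)  # recipient decrypts with the PRIVATE key

print(ciphertext)            # 2790
print(decrypted == message)  # True
```

Anyone can compute the ciphertext from the public key, but only the holder of `d` (which is feasible to derive only if you know the secret primes) can recover the message, which is exactly the asymmetry HTTPS relies on.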
According to a recent report from IDC, by 2020, almost half of IT infrastructure spend will be on cloud IT infrastructure.
Just as cloud computing means different things to different people, so does the term Service Provider (SP). For the purpose of this blog, I include Cloud Service Providers (CSPs), hosting providers (colocation and managed), and telcos in the SP category.
Many organizations have a mandate to cut IT spending while rolling out secure application services in a continuous delivery model. Many R&D teams in these organizations have adopted Agile and DevOps practices to enable faster delivery. The goal of these practices is to deliver applications more quickly and to deploy them with a lower failure rate than traditional approaches.
It has been a while since Cisco announced end-of-life for its Application Control Engine (ACE) products. The last date of support, January 31, 2019, is fast approaching. If you rely on ACE for load balancing in your environment, it is time to migrate and look to the future.