Last week I met with a very large financial-services enterprise that has adopted on-demand provisioning. Having virtualized most of their infrastructure, they spin up applications on demand and have developed tools that automate the provisioning of applications and servers for customers and internal application developers through self-service applications.
Many organizations, such as Netflix and Amazon, use microservice architecture to implement business applications as a collection of loosely coupled services. Among the reasons to move to this distributed, loosely coupled architecture are hyperscale and continuous delivery for complex applications. Teams in these organizations have adopted Agile and DevOps practices to deliver applications quickly and to deploy them with a lower failure rate than traditional approaches. However, you have to balance the complexity that comes with a distributed architecture against the application's needs, scale requirements, and time-to-market constraints.
If you have signed into Gmail and noticed that you could also access other Google apps such as Google Maps, YouTube, Google Play, and Google Photos, you are already using single sign-on (SSO)! The user logs in once to a Google account and has access to the rest of Google's applications.
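The core idea behind SSO can be sketched in a few lines: one identity provider issues a signed token at login, and every participating application verifies the signature instead of asking the user to log in again. The following is a toy illustration only, with made-up names; real SSO systems such as SAML or OpenID Connect use public-key signatures and standardized token formats rather than this simple HMAC scheme.

```python
import base64
import hashlib
import hmac
import json

# Toy shared secret between the identity provider and the apps.
# Real SSO deployments use asymmetric signatures, not a shared key.
SECRET = b"demo-signing-key"

def issue_token(user: str) -> str:
    """Identity provider: sign a claim once, at login."""
    payload = base64.urlsafe_b64encode(json.dumps({"user": user}).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def verify_token(token: str):
    """Any participating app: accept the token if the signature checks out."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or forged token
    return json.loads(base64.urlsafe_b64decode(payload))["user"]

token = issue_token("alice")           # the user logs in once...
assert verify_token(token) == "alice"  # ...and every app accepts the token
```

Each application only needs the verification step, which is what lets one login grant access everywhere.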
Many of us are familiar with Hypertext Transfer Protocol Secure (HTTPS), which uses a cryptographic protocol commonly referred to as Transport Layer Security (TLS) to secure our communication on the Internet. In simple terms, there are two keys: one available to everyone via a certificate, called the public key, and the other held only by the recipient of the communication, called the private key. When you want to send encrypted communication to someone, you encrypt it with the receiver's public key; only the recipient, who holds the matching private key, can decrypt it. In TLS itself, this public-key exchange is used during the handshake to establish a symmetric session key that then encrypts the rest of the conversation.
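The public/private split can be made concrete with the classic textbook RSA example. The numbers below are deliberately tiny and utterly insecure; real TLS keys are 2048 bits or more, and this is a sketch of the math, not of TLS.

```python
# Textbook RSA with toy primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                 # modulus, part of both keys: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent  -> public key  (e, n)
d = pow(e, -1, phi)       # private exponent -> private key (d, n)

message = 65              # a message encoded as a number smaller than n

# Anyone who knows the public key can encrypt...
ciphertext = pow(message, e, n)

# ...but only the private-key holder can decrypt.
decrypted = pow(ciphertext, d, n)
assert decrypted == message
```

Note that `pow(e, -1, phi)` (the modular inverse) requires Python 3.8 or later. The asymmetry is the whole point: publishing `(e, n)` lets anyone send you secrets, while `d` never leaves your hands.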
According to a recent report from IDC, by 2020 almost half of IT infrastructure spending will go to cloud infrastructure.
Just as cloud computing means different things to different people, so does the term service provider (SP). For the purpose of this blog, I include cloud service providers (CSPs), hosting providers (colocation and managed), and telcos in the SP category.
Many organizations have a mandate to cut IT spending while rolling out secure application services in a continuous delivery model. Many R&D teams in these organizations have adopted Agile and DevOps practices to enable faster delivery. The goal of these practices is to deliver applications more quickly and to deploy them with a lower failure rate than traditional approaches.
It has been a while since Cisco announced end-of-life for its Application Control Engine (ACE) products. The last date of support, January 31, 2019, is fast approaching. If you rely on ACE for load balancing in your environment, it is time to migrate and look to the future.
I recently met with a regional cloud service provider (CSP) that has adopted on-demand provisioning as its IT model. Having virtualized most of its infrastructure, the CSP spins up applications on demand and has developed tools that automate the provisioning of applications and servers for customers/tenants through a self-service portal. Rather than build out and manage ever more physical infrastructure, with the associated time and expense, the CSP is adopting the concepts of the software-defined data center (SDDC), which builds on virtualization (of compute, networking, and storage) to offer software and network services to many different clients. More importantly, the CSP is also catering to the needs of DevOps teams and IT architects, both internally and for its tenants/clients, by enabling true self-service through automation.
Hypertext Transfer Protocol (HTTP) is the protocol used primarily for communication between a user's browser and the websites the user is accessing. Introduced in 1991, with a major revision to HTTP/1.1 in 1999, HTTP has many limitations. In 2009, engineers at Google redesigned the protocol in a research project called SPDY (pronounced “speedy”) to address some of the limitations of HTTP/1.1.
Websites in the early '90s, when HTTP was introduced, were markedly different from today's websites. In February 2015, the Internet Engineering Task Force (IETF) approved a new version, HTTP/2, to keep up with the evolution the internet has undergone since then.