As the world waits for the introduction of 5G networks, the industry is gearing up to address the security challenges that may accompany them. 5G networks will support a vastly larger number of interconnected devices, a tremendous increase in bandwidth, and the collaborative operation of legacy and new access technologies. Undoubtedly, the upcoming 5G environment will demand additional security mechanisms to ensure business continuity. Because 5G systems are meant to be service-oriented, it is especially important to address these challenges appropriately and to build stronger security and privacy settings into 5G networks.
The Transmission Control Protocol (TCP) drives major internet operations such as video streaming, file transfers, web browsing, and communications, accounting for the majority of fixed-access internet traffic and an even larger share of mobile internet traffic. Surprisingly, TCP performance has yet to reach its full potential. Sub-optimal TCP performance has undesirable consequences for communications service providers, who struggle to make the most of expensive resources, to combat operational inefficiencies, and to deliver a high quality of experience to subscribers.
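To make this concrete, some of the tuning happens at the socket level on the endpoints. The sketch below is a minimal Python illustration, assuming a Linux host and using illustrative values, of two common knobs: disabling Nagle's algorithm for latency-sensitive traffic and requesting a larger send buffer for high bandwidth-delay paths.

```python
import socket

# Minimal sketch: two socket-level TCP tunings. Values are illustrative,
# and the right settings depend on the actual network path.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Disable Nagle's algorithm so small writes go out immediately,
# trading some bandwidth efficiency for lower latency.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Request a 4 MB send buffer for high bandwidth-delay paths;
# the kernel may clamp this to its configured maximum.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)

sock.connect(("example.com", 80))  # placeholder host
```

Endpoint tuning like this is only part of the story; providers also care about congestion control and middlebox behavior across the path, which no single socket option fixes.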
Throughout the past four posts in this series we have returned to some of the same key points. Today, we will recap them.
Automation has come up in many conversations I have had with our customers lately, especially with cloud migration in play and DevOps processes with continuous development becoming prevalent.
Management and monitoring in Software-Defined Data Centers (SDDC) benefit from automation principles, programmability, and API- and policy-driven provisioning of application environments through self-service templates. These best practices help application owners define, manage, and monitor their own environments while benefiting from the performance, security, business-continuity, and monitoring infrastructure provided by IT teams. SDDC also changes the way IT designs and thinks about infrastructure: the goal is to adapt to the continuous-delivery needs of application owners in a “cloudy” world.
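As a sketch of what policy-driven, self-service provisioning can look like from the application owner's side, the Python snippet below posts a template to a provisioning API. The endpoint, field names, and policy labels are all hypothetical, invented for illustration rather than taken from any particular SDDC product.

```python
import json
import urllib.request

# Hypothetical self-service template: the application owner declares
# what they need, and IT-defined policies supply security, backup,
# and monitoring behind the scenes.
template = {
    "app": "order-service",
    "environment": "staging",
    "vm_count": 3,
    "policies": {
        "security": "corporate-baseline",  # enforced by the security team
        "backup": "daily",                 # business-continuity policy
        "monitoring": "standard",          # attaches shared monitoring
    },
}

# The endpoint URL is a placeholder for whatever the platform exposes.
request = urllib.request.Request(
    "https://sddc.example.internal/api/v1/environments",
    data=json.dumps(template).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode("utf-8"))
```

The design point is the separation of concerns: the template names intent, while the policies it references are owned and maintained by IT.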
In the past 20 years I have often found myself looking for just the right technology tool to solve a specific problem. Most often I have been able to find something to fit the bill, but not the sought-after, best-of-breed solution that promises to solve all the world’s problems. Sometimes I wonder if it would be possible to find the right combination of tools to create world peace, end world hunger, or stop global warming. Obviously I’m being facetious, but the problem is that this approach of finding specific tools to solve symptomatic problems remains the same as it was twenty years ago. By the time you assemble all the disparate tools to create Utopia, you realize you actually have Frankenstein. Too many tools, like too many cooks in the kitchen, can put success out of reach: too much to set up, too much complexity, no global view, and oftentimes fractured ownership. Not to mention that this approach often makes solving the root cause of the problem difficult.
Last week I met with a very large financial enterprise that has adopted on-demand provisioning. Having virtualized most of its infrastructure, the company spins up applications on demand and has developed tools that automate the provisioning of applications and servers for customers and internal application developers through self-service applications.
In part one of this blog series we discussed how the relevant DevOps teams oftentimes lack infrastructure knowledge and know-how. This is not what was intended when “Agile” moved from being a pure development approach to a whole technology-management methodology, but it is where we find ourselves. One of the consequences is that the traditional users of many technologies, the developers and application owners, know what functionality they should have but not where to get it.
Most businesses have multi-function printers that can fax, scan, and copy. In our roles, we are multi-functional as well. A network architect often becomes the operational troubleshooter because of their knowledge and expertise. The financial expert can take on supply logistics because of their understanding of the parts and processes involved in the day-to-day business.
In today’s world, digital transformation has changed how people interact with businesses and conduct their work. People interface with applications over the network, and those applications need to be responsive and provide a quality of experience that reflects well on the business and the services it provides. When an application’s performance degrades, the user’s experience suffers, and that negative experience translates into lost revenue, brand damage, and reduced worker productivity.
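One practical way to catch degradation before users complain is to measure application response time continuously. Below is a minimal Python sketch of such a probe; the URL and threshold are placeholders, and a production monitor would sample on a schedule and track percentiles rather than a single request.

```python
import time
import urllib.request

URL = "https://app.example.com/health"  # placeholder endpoint
THRESHOLD_SECONDS = 1.0                 # illustrative response-time budget

# Time one round trip to the application and flag it if it
# exceeds the response-time budget.
start = time.monotonic()
with urllib.request.urlopen(URL, timeout=5) as response:
    response.read()
elapsed = time.monotonic() - start

status = "DEGRADED" if elapsed > THRESHOLD_SECONDS else "OK"
print(f"{status}: {URL} answered in {elapsed:.2f}s")
```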