Management and monitoring in Software-Defined Data Centers (SDDCs) benefit from automation, programmability, and API- and policy-driven provisioning of application environments through self-service templates. These practices let application owners define, manage, and monitor their own environments while still benefiting from the performance, security, business continuity, and monitoring infrastructure provided by IT teams. SDDC also changes the way IT designs and thinks about infrastructure: the goal is to adapt to the continuous-delivery demands of application owners in a “cloudy” world.
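To make the idea of policy-driven, self-service provisioning concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the template fields (`tier`, `vcpus`, `owner`) and the policy limits are illustrative stand-ins, not the API of any particular SDDC product. The point is only that a self-service template can be checked against IT-defined policy before anything is provisioned.

```python
# Minimal sketch of policy-driven, self-service provisioning.
# All field names and policy limits here are hypothetical examples,
# not tied to any specific SDDC product or API.

ALLOWED_TIERS = {"dev", "test", "prod"}
MAX_VCPUS = 16

def validate_template(template: dict) -> list:
    """Return a list of policy violations; an empty list means the template passes."""
    violations = []
    if template.get("tier") not in ALLOWED_TIERS:
        violations.append("tier must be one of: " + ", ".join(sorted(ALLOWED_TIERS)))
    if template.get("vcpus", 0) > MAX_VCPUS:
        violations.append("vcpus exceeds policy limit of %d" % MAX_VCPUS)
    if not template.get("owner"):
        violations.append("owner is required for monitoring and chargeback")
    return violations

# An application owner submits a template; IT policy is enforced automatically.
request = {"tier": "dev", "vcpus": 4, "owner": "app-team-a"}
print(validate_template(request))  # prints: []
```

In a real SDDC, the same pattern would sit behind an API endpoint: the template is data, the policy is code owned by IT, and application owners serve themselves within those guardrails.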
Natural disasters serve as excellent examples of the unforeseen consequences a cyber-attack against infrastructure could have. Take, for example, a strong windstorm in Wyoming in February 2017. The storm knocked down power lines, forcing water and sewage treatment plants to operate on backup generators, which weren’t available to some of the pumps that moved sewage from low-lying areas to higher ground. As a result, the sewers backed up as the weather prolonged the outage. While government officials tasked with disaster planning have long focused on the cascading effects of power outages from natural disasters, only recently have they realized that the effects of cyber warfare could be quite similar.
Hey folks, I’m back with my second installment on protecting the un-protectable:
Last week we discussed the SCADA environment and some of the unique business and technology challenges we face when trying to secure it against both availability and cyber security hazards. The questions you are all asking yourselves now are “How did we get here?” and “Why would anyone build anything this insecure?” The answer is simple: we never anticipated these networks would communicate with the outside world. PCD and SCADA environments were meant to be “closed loop” and therefore air-gapped (if you’re air-gapped, you don’t need security, right? Ask Iran about the Natanz nuclear facility). At the time, that was a perfectly reasonable assumption. Why would factory machinery, a power plant, or an oil rig ever need to access the internet? I could go on and on. However, this paradigm changed for two reasons.
1. Focus on availability-security
Latency is a primary concern for these operators, so availability gets the attention, while most traditional security models focus only on confidentiality and integrity. All three aspects need to be addressed to ensure comprehensive security.
2. Understand the value & meaning of architecture as it relates to attacks
- Placement of technology devices in the environment is key
- Types of technologies used (e.g. UDP, CDN, stateful devices, etc.)
- Know the limitations of business-logic decisions: RFC and ISO compliance may, ironically, end up being a known vulnerability (e.g. leveraging RFC-compliant web applications)
- Deploying 80% of known technical and operational controls is no longer adequate. A process must be in place to technically and operationally lock down 100% of your environment during a cyber-attack.
- Don’t rely on a single security technology to do the entire job (i.e., practice defense in depth)
- Use encryption technologies (e.g. SSL/TLS)
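The encryption point above deserves one caveat: simply turning on SSL/TLS is not enough if legacy protocol versions or unverified certificates are accepted. As a hedged sketch, here is how Python’s standard `ssl` module can be configured to enforce certificate verification and a modern TLS floor; the specific version floor is an illustrative choice, and in an OT environment you would match it to what your devices actually support.

```python
import ssl

# Sketch: enforcing certificate verification and a modern TLS floor
# with Python's standard ssl module. The minimum version chosen here
# (TLS 1.2) is an illustrative assumption, not a universal mandate.

context = ssl.create_default_context()            # verifies peer certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL/TLS versions

print(context.verify_mode == ssl.CERT_REQUIRED)   # prints: True
print(context.check_hostname)                     # prints: True
```

A context built this way would then be passed to whatever client or server wraps your sockets, so that every connection inherits the same policy rather than each integration hand-rolling its own.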
Virtualization of existing technologies is an evolutionary step in the development of cloud designs. The cloud is supposed to be an architecture that delivers applications and data in a reliable and fault-tolerant manner. The benefits that we want to derive are not new. We are just applying them to a different business model. We created the cloud to deliver applications and data anytime, anywhere, and to any device. We need to reconfigure existing processes and technologies to support the evolving cloud architecture.