I remember when I first learned about web application firewall (WAF) technology. It seemed like magic to me: a device that could compensate for bad coding or unexpected/unintended web application functionality. It could do this by learning expected application behavior and then enforcing that behavior, even if the application itself was capable of allowing the unwanted behavior. The business case for such a technology is even easier to recognize today than it was in the early-to-mid 2000s, when it first appeared: the ability to have a device compensate for human error.
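The "learn, then enforce" idea can be reduced to a toy positive-security model: record which parameters an application actually uses during a learning phase, then reject requests that deviate. This is an illustrative sketch only, not any vendor's implementation; all names here are made up:

```python
# Toy positive-security model, the core idea behind early WAF
# behavior learning and enforcement. Illustrative only.

learned = {}  # path -> set of parameter names observed during learning

def learn(path, params):
    """Learning phase: record the parameters seen for a given path."""
    learned.setdefault(path, set()).update(params)

def enforce(path, params):
    """Enforcement phase: allow only parameters seen during learning."""
    allowed = learned.get(path, set())
    return set(params) <= allowed

learn("/login", {"user", "password"})
print(enforce("/login", {"user", "password"}))           # expected parameters
print(enforce("/login", {"user", "password", "debug"}))  # unexpected "debug" parameter
```

Even if the application would happily accept a stray `debug` parameter, the enforcement layer refuses it, compensating for the coding error rather than fixing it.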
In this blog post we will cover the basics of building a truly resilient network, where throughput isn't always the priority but reliability and redundancy are. We will look at this from the operator's standpoint.
Public clouds are great for managing applications and data without the frustration and expense of supporting the underlying infrastructure. When I lease a car, I am able to use it for the standard tasks that I perform. Functionally, the car is able to do the same things as a vehicle that I could purchase. I can run errands, drive to work, or even take trips.
One of the main advantages of leasing the car is that when there is a problem or maintenance needs to be performed, I am not responsible. The dealer I leased the car from handles all of those tasks. Oil changes, filter replacements, and all of the significant work needed to keep the car running well are taken care of without my intervention, beyond bringing the car in when requested.
Security is an ever-evolving concept, in both theory and application. It is important to deploy and leverage technologies that can adapt and change with our security models. When networking and application protocols were first developed, minimal thought was given to security. Protocols like Telnet, FTP, DNS, SMTP, and even HTTP were designed for function and user experience, not integrity.
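HTTP illustrates the point nicely: its Basic authentication scheme (RFC 7617) merely base64-encodes the username and password, so over plain HTTP any observer on the path can recover the credentials. A minimal sketch (the username and password are illustrative):

```python
import base64

# HTTP Basic auth puts "user:password" through base64 -- this is
# encoding, not encryption, and is trivially reversible.
header = "Authorization: Basic " + base64.b64encode(b"alice:s3cret").decode()
print(header)

# Anyone who sees the request over plain HTTP can recover the credentials:
encoded = header.split()[-1]
print(base64.b64decode(encoded).decode())  # -> alice:s3cret
```

The scheme was perfectly reasonable for its era's goals of simplicity and interoperability; integrity and confidentiality had to be bolted on later, via TLS.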
I recently met with a regional cloud service provider (CSP) that has adopted provisioning on demand as its IT model. The CSP spins up applications on demand, having virtualized most of its infrastructure, and has developed tools to automate the provisioning of applications and servers for customers/tenants through a self-service portal. Rather than build out and manage ever more physical infrastructure, with the associated time and expense, the CSP is adopting the concepts of the software-defined data center (SDDC), which builds on virtualization (of software, networking, and storage) to offer software and network services to many different clients. More importantly, the CSP is also catering to the needs of DevOps and IT architects – both internally and externally for its tenants/clients – by enabling true self-service through automation.
I'm back with another exciting installment on SCADA security. Today I want to cover authentication and system redundancy.
It should be obvious, but authentication takes on an even more important role in securing SCADA environments. If you can't protect the traffic coming in, you should at least ensure that it is coming from a trusted source. This is one of the most emphasized points in the U.S. government's SCADA compliance requirements (and we see similar requirements for SCADA/PCD environments in other countries around the world). I'm also glad to say that this is one part of compliance that most customers actually meet, because it's easy and carries no business risk. You can send out a token and presto! Two-factor authentication is in place. That's what the law requires, and that's what most companies that need to comply do. Yes, I was very specific in my wording: they send out a single token and set up a single shared account for each of their equipment suppliers. In other words, they comply but completely miss the point.
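The tokens behind that two-factor step are typically time-based one-time password (TOTP) generators. As a hedged sketch of the standard RFC 6238 algorithm (not any particular vendor's token), the code is just an HMAC over a 30-second counter, which is exactly why sharing one token across a whole supplier's staff defeats the purpose: everyone holding the secret produces the same code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the last digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", T=59, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59, digits=8))  # -> 94287082
```

The secret, not the code, is the credential; handing one secret to an entire organization is functionally a shared password with extra steps.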
Four days. Four days is what it takes for 108,000 technologists to gather in the enchanting city of Barcelona and tell the world what it can expect to experience in the future of mobile communications. Four days is also about how long it takes to recover from the sleep deprivation, work backlog, and general buzz that come with being part of a spectacle as grand and electrifying as Mobile World Congress.
The nice part about reflecting on MWC 2017 is that it is very easy to select a handful of themes that permeated all the exhibition halls, keynotes, and hallway chatter. For me, this is the list: IoT, 5G, virtualization, and artificial intelligence.
We build security solutions to protect our networks from the rest of the internet, but do we do anything to protect the network from our own employees and users? The first line of protection for your networks is not the firewall or other perimeter security device; it is the education and protection of the people who use the network. People are concerned about having their apartments or homes broken into, so they put locks on the doors, install alarm systems, or place surveillance equipment like security cameras around the property. They are vigilant about making sure that an intruder cannot enter the home easily without being detected and alarms being raised.
Automobiles of the late 19th and early 20th centuries did not have a complex dashboard displaying a multitude of information like we have today. The industry was very young, and its inventors and technologists focused on making sure that these ‘horseless carriages’ went from point A to point B. Builders and consumers did not have the time or capacity to incorporate extensive diagnostics and metrics to understand the state and performance of these vehicles.
As the automobile technologies matured, dashboards were put in place to give people information about how the vehicle was performing. Speedometers tell us how fast we are going. Oil and temperature gauges give us insight into the health of the engine. Air pressure monitors let us know when to add air to our tires. Today, we even have built-in compasses and GPS systems that can pinpoint our location on the planet within a few meters.
Hey folks, I’m back with my second installment on protecting the un-protectable:
Last week we discussed the SCADA environment and some of the unique business and technology challenges we face when trying to secure it against both availability and cyber security hazards. The questions you are all asking yourselves now are: "How did we get here?" and "Why would anyone build anything this insecure?" The answer is simple: we never anticipated that these networks would communicate with the outside world. PCD and SCADA environments were meant to be "closed loop" and therefore air-gapped (and if you're air-gapped, you don't need security, right? Ask Iran about the Natanz nuclear facility). If you think about it, that was a perfectly reasonable assumption at the time. Why would factory machinery ever need to access the internet? Or a power plant, or an oil rig? I could go on and on. However, this paradigm changed for two reasons.