Imagine browsing your favorite websites on your computer or playing a browser-based game when things start slowing down. You click the window in frustration, hoping the site will respond, to no avail. Finally, the browser alerts you that something is making it run too slowly and you need to reset it.
In the year 1453, the Ottoman Empire under Sultan Mehmed II accomplished what none before them had been able to achieve. For more than a millennium, Byzantium had remained a bastion of the Orthodox faith, the great empire of the East. The hordes and barbarians that had caused the downfall of so many other empires had been unable to conquer this unconquerable city. Until, one day, it all changed.
I remember when I first learned about Web application firewall technology. It seemed like magic to me: a device that could compensate for bad coding or unexpected/unintended web application functionality. It could do this by learning expected application behavior and then enforcing that behavior, even if the application itself was capable of allowing the unwanted behavior. The business case for such a technology is easily recognizable, even more so today than it was in the early-to-mid 2000s when the technology first emerged: the ability to have a device compensate for human error.
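The learn-then-enforce idea described above is often called a "positive security" model. Here is a minimal sketch of the concept in Python; the class and method names are illustrative, not any vendor's actual API, and a real WAF profiles far more than parameter shapes:

```python
import re

class LearningWAF:
    """Toy positive-security model: learn what normal requests look like,
    then block anything that falls outside the learned profile."""

    def __init__(self):
        self.profile = {}  # parameter name -> set of observed value shapes

    @staticmethod
    def _shape(value):
        # Generalize a value into a coarse shape: digit runs -> "N", letter runs -> "A"
        return re.sub(r"[A-Za-z]+", "A", re.sub(r"\d+", "N", value))

    def learn(self, params):
        # Learning phase: record the shape of every parameter observed
        for name, value in params.items():
            self.profile.setdefault(name, set()).add(self._shape(value))

    def allow(self, params):
        # Enforcement phase: reject unknown parameters or unfamiliar shapes
        for name, value in params.items():
            if name not in self.profile:
                return False
            if self._shape(value) not in self.profile[name]:
                return False
        return True

waf = LearningWAF()
waf.learn({"user_id": "1042", "sort": "asc"})

print(waf.allow({"user_id": "7"}))         # True: same shape as learned traffic
print(waf.allow({"user_id": "1 OR 1=1"}))  # False: unfamiliar shape (injection-like)
print(waf.allow({"debug": "true"}))        # False: parameter never seen in learning
```

Note that the application itself might happily accept `1 OR 1=1`; the point is that the device in front of it refuses traffic that doesn't match learned behavior, compensating for the coding error.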
In this blog post we will cover the basics of building a truly resilient network where throughput isn't always important, but reliability and redundancy are. We will look at this from the operator's standpoint.
Public clouds are great for managing applications and data without the frustration and expense of supporting the underlying infrastructure. When I lease a car, I am able to use it for the standard tasks that I perform. Functionally, the car is able to do the same things as a vehicle that I could purchase. I can run errands, drive to work, or even take trips.
One of the main advantages of leasing the car is that when there is a problem or maintenance needs to be performed, I am not responsible. The dealer I leased the car from handles all of those tasks. Oil changes, filter replacements, and all significant work to keep the car running well are taken care of without my intervention, beyond bringing the car in when requested.
Security is an ever-evolving concept in theory and application. It is important to deploy and leverage technologies that can adapt and change with our security models. When networking and application protocols were initially developed, minimal thought was given to security. Protocols like Telnet, FTP, DNS, SMTP, and even HTTP were designed for function and user experience, not integrity.
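A concrete illustration of that design era: HTTP Basic authentication sends credentials base64-encoded, which is an encoding, not encryption. The short Python sketch below (with made-up example credentials) shows that anyone who can observe the traffic can recover the username and password with a single call:

```python
import base64

# What a client puts in the Authorization header for HTTP Basic auth:
# "Basic " + base64("username:password"). No secrecy is involved.
header_value = "Basic " + base64.b64encode(b"alice:s3cret").decode()
print(header_value)  # this is what crosses the wire in cleartext

# An eavesdropper reverses it trivially:
encoded = header_value.split(" ", 1)[1]
print(base64.b64decode(encoded).decode())  # recovers alice:s3cret
```

Telnet and FTP are even blunter: they transmit the password characters themselves in cleartext, which is why their encrypted successors (SSH, FTPS/SFTP, HTTPS) exist.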
I recently met with a regional cloud service provider (CSP) that has adopted provisioning on demand as its IT model. They spin up applications on demand, having virtualized most of their infrastructure, and have developed tools to automate the provisioning of applications and servers for customers/tenants through a self-service portal. Rather than build out and manage more and more physical infrastructure, with the associated time and expense, the CSP is adopting the concepts of the software-defined data center (SDDC), which builds on virtualization (of software, networking, and storage) to offer software and network services for many different clients. More importantly, the CSP is also catering to the needs of DevOps and IT architects, both internally and externally for its tenants/clients, by enabling true self-service through automation.
I’m back with another exciting installment on SCADA security. Today I want to cover authentication and system redundancy.
It should be obvious, but authentication takes on an even more important role in securing SCADA environments. If you can’t protect the traffic coming in, you should at least ensure that the traffic is coming from a trusted source. This is one of the most emphasized points in the U.S. government’s SCADA compliance requirements (we see similar requirements for SCADA/PCD environments in other countries throughout the world). I’m also glad to say that this is one part of compliance that most customers meet, because it’s easy and there is no business risk. You can send out a token and presto! Two-factor authentication is in place. That’s what the law requires, and that’s what most companies that need to comply do. Yes, I was very specific in my wording. They send out one token and set up one shared account for each of their equipment suppliers. In other words, they comply but completely miss the point.
Four days. Four days is what it takes for 108,000 technologists to gather in the enchanting city of Barcelona to tell the world what they can expect to experience in the future of mobile communications. Four days is also about the number of days it takes to recover from the sleep deprivation, work backlog, and general buzz that one experiences by being part of a spectacle as grand and electrifying as Mobile World Congress.
The nice part about reflecting on MWC 2017 is that it is very easy to select a handful of themes that permeated all the exhibition halls, keynotes, and hallway chatter. For me, this is the list: IoT, 5G, virtualization, and artificial intelligence.
We build security solutions to protect our networks from the rest of the internet, but do we do anything to protect the network from our own employees and users? The first line of protection for your network is not the firewall or other perimeter security device; it is the education and protection of the people who use the network. People are concerned about having their apartments or homes broken into, so they put locks on the doors, install alarm systems, or place surveillance equipment like security cameras around the property. They are vigilant about making sure that an intruder cannot enter the home easily without detection and alarms being raised.