Data security is an interesting field. It seems like we have all these solutions, yet breaches keep occurring at an ever more rampant pace. Simple things like Web 2.0 applications manage to defeat almost all security measures, and kids can defeat most corporate systems. So, how secure are we?
Let's look at the typical security stack in a company:
Perimeter firewall
Some kind of virus/malware solution (desktop or server or Email)
IDS / IDP (Intrusion Detection or Prevention Systems)
URL filtering or other UTM (Unified Threat Management)
Perhaps some logging
Perhaps a proxy
Perhaps a web application firewall (good possibility it isn’t actively enabled)
It’s really pitiful in some regards. In general, the whole stack tries to make us secure by looking for and denying bad stuff. This leads to a very dangerous assumption: that which is not bad must be good. It’s like a border checkpoint that relies on a manually maintained do-not-enter list (think no-fly) and some self-answered security questions.
The firewall is the worst of them all. It’s essentially a fancy bridge/router connecting two Ethernet wires, and its security model amounts to self-declaration by the packets. Are you web traffic? Yes, I’m port 80. Okay then, come on through; no need to be stopped or inspected. Really, port 80 is just web browsing? Not anymore: it’s file transfers, it’s bandwidth robbing, it’s data leakage, it’s phone-home traffic, it’s everything now. Firewalls are useless; about all they do now is slow down legitimate traffic. They don’t address the intent, actions, or characteristics of the data: is it being used to evade security, transfer data, consume excessive bandwidth, tunnel other applications, or carry malware? Is it prone to vulnerabilities? We also can’t tell who is using it. We just see IP addresses in ever-larger, generally dynamic (DHCP) networks, where we might eventually figure out which device was involved if we look while the address lease is still active. Depending on the device, though, we still don’t know who the actual user is. So firewalls tend to know nothing about the actual applications, data, users, characteristics, or threats borne in the content, and they slow traffic down. Wonderful.
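The "self-declaration" problem can be sketched in a few lines. This is a hypothetical toy, not any real firewall's logic: the point is that a classic port-based rule sees only the destination port, so a browser, a file-transfer tool, and malware phoning home all get the same verdict as long as they pick port 80.

```python
# Toy sketch of the port-based trust model described above.
# Hypothetical names and rules -- illustrative only, not a real product.

ALLOWED_PORTS = {80, 443}  # "web traffic" -- by self-declaration of the packet

def port_based_verdict(src_ip: str, dst_port: int) -> str:
    """Classic stateless rule: the decision looks only at the port number,
    never at the actual application, user, or payload."""
    if dst_port in ALLOWED_PORTS:
        return "ALLOW"  # could be browsing, a tunnel, exfiltration, phone-home...
    return "DENY"

# A browser and a data-leakage script are indistinguishable here:
print(port_based_verdict("10.0.0.15", 80))    # ALLOW
print(port_based_verdict("10.0.0.15", 6881))  # DENY
```

Note that nothing about `src_ip` is even consulted; the rule passes judgment on a number the sender chose for itself.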
OK, but we can layer on IDS/IDP, proxies, URL filtering, A/V scanning, DLP, and lots of other magic boxes. We create a sprawl of technology and devices to learn and try to correlate. I won’t even get into the management overhead or the problematic performance and lack of context awareness. I hate the underlying principle: “it must be good if it isn’t bad.” Wow, that’s messed up! We should be judging good and bad based on characteristics and actions. It’s not who the user is and their prior reputation; rather, what are they doing now? The problem isn’t just bad applications or bad sites; it’s also making sure threats don’t exist on approved sites and applications. It’s the mentality we already apply to mail servers: we have the approved corporate mail servers, but of course we still inspect the content for threats. So what makes much more sense is disallowing applications and sites that are inappropriate, and then making sure approved sites, URLs, and applications (e.g. Facebook) are not used in inappropriate manners or to propagate threats (e.g. Koobface).
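The two-step model argued for here (approve the application, then still inspect its content) can be sketched as follows. Everything in this snippet is hypothetical — the app names, the signature set, and the string matching stand in for real application identification and real content inspection:

```python
# Hypothetical sketch of "approve the application, then inspect the content."
# Illustrative names only; real systems use proper app identification and
# threat engines, not substring matching.

APPROVED_APPS = {"facebook", "corporate-mail"}
THREAT_SIGNATURES = {"koobface"}  # placeholder for real threat detection

def verdict(app: str, payload: str) -> str:
    # Step 1: is this an application we allow at all?
    if app not in APPROVED_APPS:
        return "DENY: application not approved"
    # Step 2: even on an approved application, scan the content for threats.
    lowered = payload.lower()
    if any(sig in lowered for sig in THREAT_SIGNATURES):
        return "DENY: threat on approved application"
    return "ALLOW"

print(verdict("bittorrent", "chunk data"))             # DENY: application not approved
print(verdict("facebook", "check out this link"))      # ALLOW
print(verdict("facebook", "Koobface wall post"))       # DENY: threat on approved application
```

The design point is that neither step alone suffices: the app whitelist is the mail-server mentality, and the content scan is what keeps an approved app like Facebook from becoming a Koobface delivery channel.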