We recently held a Penetration Panel webinar that consisted of a nice mix of our attack and defense teams. The event afforded participants an opportunity to submit questions to the experts prior to the start of the webinar. One of the questions that I was slated to answer was “Describe the best practice methods you’ve discovered work best to prevent/detect unauthorized access.” My answer at the time was something to the effect of “no silver bullets, security is an onion, network baselining, and good documentation.”
To be honest, I feel like I cheated the audience out of a well-articulated answer, given the relatively short time we had and the complexity of the issue. I still believe that security really is best accomplished when layered or placed in tandem with other best practices. The more we are able to layer on, the better off our environment is overall, as long as we are not hindering users to the point that they feel the need to circumvent the security in place to bring back functionality or usability.
Here are the practices I feel will get you the best return for your efforts.
Documentation
The one thing we all love to have but cannot stand doing is getting our documentation in order. The bottom line is simple: this is the first and most important step defensive teams should be taking. Proper documentation should include critical business processes, how those processes move data about the network, and any systems or exceptions put in place to support them.
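Documentation pays off most when it is structured enough to query and audit. As a minimal sketch, here is one hypothetical way to capture a business process in machine-readable form; the process, hosts, and field names below are all invented for illustration:

```python
# Hypothetical, machine-readable record for one business process.
# Every name and field here is illustrative, not a required schema.
payroll_process = {
    "name": "payroll",
    "owner": "finance",
    "data_flow": [
        # (source system, destination system, protocol/port)
        ("hr-app-01", "payroll-db-01", "tcp/1433"),
        ("payroll-db-01", "bank-sftp-gw", "tcp/22"),
    ],
    "exceptions": [
        "firewall rule FW-2041 permits hr-app-01 -> payroll-db-01",
    ],
}

def systems_touching(process: dict) -> set:
    """Return every system the process relies on -- a quick audit view."""
    return {host for src, dst, _ in process["data_flow"] for host in (src, dst)}

print(sorted(systems_touching(payroll_process)))
```

Records like this make the later practices easier: the exceptions list feeds firewall reviews, and the data-flow tuples feed baselining and segmentation.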
Data Classification
Data classification is one of those practices that is normally overlooked. It's often thought to be either extremely complicated or incredibly simplistic, but the truth is that data classification has a pivotal role within every organization and should be a starting point for many other practices. You may ask why I consider this such a critical step: how do we know what data needs protecting if we don't take the time to identify and label it correctly? The general assumption is that all data needs protection, and that is not incorrect, but there are varying degrees of protection that need to be applied. This is where classification comes into play. It helps separate Public data from Internal-only data, and Internal-only data from Classified or Sensitive data. If we lump all data onto one single file server or share, then what is left to separate access?
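To make the separation concrete, here is a minimal Python sketch of a three-tier scheme; the levels, share paths, and fail-closed default are assumptions for illustration rather than a prescribed standard:

```python
from enum import IntEnum

# Hypothetical three-tier classification scheme; labels are illustrative.
class Classification(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    SENSITIVE = 2

# Example labels for shares; in practice these would come from a
# data-classification inventory, not a hard-coded dict.
share_labels = {
    "\\\\fs01\\marketing": Classification.PUBLIC,
    "\\\\fs01\\staff": Classification.INTERNAL,
    "\\\\fs01\\hr": Classification.SENSITIVE,
}

def protection_required(share: str) -> Classification:
    """Fail closed: treat anything unlabeled as the most sensitive tier."""
    return share_labels.get(share, Classification.SENSITIVE)

print(protection_required("\\\\fs01\\hr").name)       # SENSITIVE
print(protection_required("\\\\fs01\\unknown").name)  # SENSITIVE (unlabeled)
```

The fail-closed default mirrors the point above: all data gets protection until classification tells us a lighter touch is acceptable.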
Role-based Access
Similarly, if all your data is lumped together on a single file server or share, what is left to separate access? User permissions? Sure, permissions to access a file can be managed effectively, but what happens when we admins get lazy? Picture this: you just received a ticket that Janice from the Help Desk can't stand assisting end users any longer and has decided to move to accounting (ok, horrible example, but bear with me). We get the change notice, add the permissions to the accounting file share, and call it a day. Now Janice can access the data within accounting required for her new position, but what about all the shares to which she held access previously? As a member of the Help Desk, it's possible (and likely) she could access areas of the network that an accountant would have no reason to poke and prod around in. Properly implementing role-based access allows us to define groups that have access to specific systems or directories, and then drill down further to manage user-level permissions. If we define roles, create the necessary security groups, and use the role groups to manage access to directories, we can keep permission creep to a minimum through single points of management. And don't forget to generate alerts when a user account attempts to overstep its boundaries!
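Here is a toy Python sketch of the idea, with hypothetical role and share names: access is granted to roles, users inherit access only through role membership, and denied attempts raise an alert.

```python
# Hypothetical role-based access sketch. Shares are granted to roles;
# users get access only through the roles they hold.
role_shares = {
    "helpdesk": {"\\\\fs01\\it-tools"},
    "accounting": {"\\\\fs01\\accounting"},
}

user_roles = {"janice": {"accounting"}}  # moved off the Help Desk role

def can_access(user: str, share: str) -> bool:
    return any(share in role_shares.get(role, set())
               for role in user_roles.get(user, set()))

def check_access(user: str, share: str) -> bool:
    allowed = can_access(user, share)
    if not allowed:
        # In a real environment this would feed a SIEM, not stdout.
        print(f"ALERT: {user} attempted access to {share}")
    return allowed

check_access("janice", "\\\\fs01\\accounting")  # True
check_access("janice", "\\\\fs01\\it-tools")    # False, and alerts
```

Moving Janice becomes a single change to her role membership; her old Help Desk access disappears with it, which is exactly the single point of management we want.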
Network Baselining
Network baselining is another practice that can help strengthen an organization’s security posture by giving the security team a visual understanding of what is going on within the network. If we can measure our utilization, see what our peak network utilization times are, and understand what protocols and ports our infrastructure requires to function, then when something deviates from the norm, we can spot it more efficiently. This allows for the accurate tailoring of alerts and swifter response times.
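As a rough illustration of the math behind "deviates from the norm," the toy Python sketch below flags samples more than three standard deviations from a measured baseline; the utilization figures and threshold are invented for the example:

```python
import statistics

# Toy baseline: hourly utilization samples in Mbps (invented numbers).
baseline = [120, 135, 128, 140, 132, 125, 138, 130]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def deviates(sample_mbps: float, threshold: float = 3.0) -> bool:
    """Flag samples more than `threshold` standard deviations off baseline."""
    return abs(sample_mbps - mean) > threshold * stdev

for sample in (134, 410):
    if deviates(sample):
        print(f"ALERT: {sample} Mbps deviates from baseline "
              f"({mean:.0f} +/- {stdev:.0f} Mbps)")
```

The same pattern applies to ports and protocols: anything outside the documented, measured norm becomes an alert candidate rather than background noise.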
Segmentation
Segmentation, in my opinion, is an art that appears to be dying out, and that is truly a shame. The ease of plug-and-play devices and the "get it all done now" pace of demand have negatively impacted our ability to architect networks with security in mind. Proper network segmentation helps improve security by defining zones, separating data access (data classification comes back into play here), and limiting administrative functions to systems that reside within specific LANs or VLANs. Segmentation should not be limited to the network but should incorporate user and application segmentation as well.
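One way to picture zone-based segmentation is as a default-deny flow policy between zones. A minimal sketch, with made-up zone names:

```python
# Hypothetical zone policy: which source zones may initiate traffic
# to which destination zones. Zone names are invented for illustration.
allowed_flows = {
    ("user-vlan", "app-zone"),
    ("app-zone", "db-zone"),
    ("admin-vlan", "mgmt-zone"),
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Default-deny: any flow not explicitly allowed is blocked."""
    return (src_zone, dst_zone) in allowed_flows

print(flow_permitted("user-vlan", "app-zone"))   # True
print(flow_permitted("user-vlan", "db-zone"))    # False: users never reach the DB directly
print(flow_permitted("user-vlan", "mgmt-zone"))  # False: management stays in admin-vlan
```

Note how the last check captures the point above: administrative functions are reachable only from systems inside the designated admin VLAN.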
Log Correlation, Aggregation, and Alerting
Last but certainly not least is logging and alerting. All the previously mentioned practices should feed into logging and alerting. This service, when done correctly, is our primary means of determining that something is amiss, that someone is attempting to gain access to resources they shouldn't be accessing, or that a breach has occurred. Logs are also how we recreate the events that led up to a breach, and they give us key information about how to prevent the scenario from occurring again.
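As a toy example of correlation, the Python sketch below raises an alert when one source IP fails logins on several hosts within a short window; the event format, host names, and thresholds are all hypothetical:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical events, already normalized and aggregated from several sources.
events = [
    {"ts": datetime(2024, 5, 1, 9, 0, 5),  "src": "10.0.5.7", "host": "dc01",  "event": "login_failed"},
    {"ts": datetime(2024, 5, 1, 9, 0, 9),  "src": "10.0.5.7", "host": "fs01",  "event": "login_failed"},
    {"ts": datetime(2024, 5, 1, 9, 0, 14), "src": "10.0.5.7", "host": "web01", "event": "login_failed"},
    {"ts": datetime(2024, 5, 1, 9, 3, 0),  "src": "10.0.8.2", "host": "dc01",  "event": "login_ok"},
]

def correlate_failures(events, window=timedelta(minutes=1), threshold=3):
    """Alert when one source IP fails logins on `threshold` hosts in a window."""
    failures = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["event"] != "login_failed":
            continue
        recent = [f for f in failures[e["src"]] if e["ts"] - f["ts"] <= window]
        recent.append(e)
        failures[e["src"]] = recent
        if len({f["host"] for f in recent}) >= threshold:
            print(f"ALERT: {e['src']} failed logins on {threshold}+ hosts within {window}")

correlate_failures(events)
```

No single log source sees anything alarming here; only the aggregated, correlated view reveals the spray across hosts, which is precisely why the earlier practices should all feed this one.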
If we are able to put all these services together and have them working cohesively, we are no longer operating at a purely reactive level; we gain the ability to proactively search for issues and anomalies. We can stop chasing after threat actors and start hunting them down.