SteveSchlarman
Archer Employee

When I started this blog series, I referenced our latest SBIC (Security Business Innovation Council) report – Transforming Information Security: Future-Proofing Processes.  One of the points covered in that report highlighted the need for evidence-based controls assurance.  The need for a more tangible, fact-based approach to measuring controls within an organization is fueled by the reality that empirical data provides a higher level of confidence in the effectiveness of a control.  While the traditional audit methods of sampling and validation provide point-in-time assessments – and, many times, valuable face time between control testers and control owners – the fact is that, given the velocity of security threats in today’s environment, this style of control assurance is past its prime.  The report stresses that an ongoing collection of relevant data to test the efficacy of controls is necessary to “future-proof” security processes.


While organizations strategize on putting in place the necessary data collection and aggregation points to constantly poll controls, there are many opportunities to improve the ongoing assessment of controls using existing processes.  The team managing security incidents within Security Operations is in an excellent position to provide this type of visibility.  Those individuals see evidence of control efficacy every time an alert crosses their screen – even an instance of a virus infection on an endpoint provides insight.  Did the local Anti-Virus find and quarantine the virus?  If so, that control worked.  If it didn’t, why not?  What control did identify the virus?  Was it an end user reporting an issue to the help desk, which investigated and identified the virus?  If so, the virus education part of the security awareness program seems to be working.  Somewhere along the line a control worked, and many times, one or more controls missed an opportunity.


This leads to an important part of today’s security operations management strategy – post-incident analysis and control efficacy.  The concept of post-incident analysis is most often relegated to the big incidents.  The post-mortem held after a major incident gives the organization plenty to think about: what went wrong, what went right, what needs to change, and so on.  However, it is those daily – dare I say mundane – alerts and events that really give important insight into the operational fabric of the security controls.  The hard part is implementing a post-incident analysis for every alert tracked down and resolved during a day’s work.  It isn’t uncommon for help desk operations to do a quick root-cause breakdown on trouble tickets coming in from end users.  Security Operations should do the same.


To implement even a simple routine for post-event security analysis, the organization should:

  • Catalog a basic set of common security controls.  This should be a collection of both technical and process-oriented controls.  It doesn’t have to be an endless list of every possible security control – even beginning with the basics is a start.  Catalog the major security tools implemented (AV, Firewall, IDS, etc.) along with the most common escalation or security monitoring processes (security awareness, help desk, access control reviews, etc.).
  • Institute a quick post-event process to map security alerts and events to this catalog (a minimal sketch of what such a mapping might look like follows this list).  If the control catalog is reasonable, this step should not add much administrative overhead to the process.  It also helps if you have a system (such as our Security Operations Management module) that has this post-event process built in.
  • Include controls that worked and didn’t work in the mapping.  Identifying the control that generated the alert isn’t the only objective here.  We also want to find those controls that “should” have worked but for some reason didn’t.  The virus missed by the AV scan is an easy one.  Identifying other “missed control opportunities” may require a little more probing into the event.  However, a key goal is to identify what didn’t work and why.  It could be that the control needs a little tuning, or it could indicate a much bigger issue – these are the things we are looking for.
  • Enhance the catalog over time.  Some controls will pop onto the radar based on the security team’s experience, while others will prove unnecessary and can be dropped.  The catalog should be a living document within the event process.
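
To make the catalog and mapping steps concrete, here is a minimal sketch in Python.  The control IDs, the outcome values, and the data structures are all hypothetical illustrations – the Security Operations Management module has its own built-in representation – but the shape of the exercise is the same: a small catalog, plus a record per event of which controls worked and which missed.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Outcome(Enum):
    """Hypothetical outcome of a control for a given event."""
    WORKED = "worked"          # control detected or contained the issue
    MISSED = "missed"          # control should have caught it but didn't
    NOT_APPLICABLE = "n/a"     # control was out of scope for this event

@dataclass
class Control:
    control_id: str
    name: str
    kind: str                  # "technical" or "process"

@dataclass
class ControlMapping:
    control_id: str
    outcome: Outcome
    note: str = ""             # e.g., why a control missed

@dataclass
class SecurityEvent:
    event_id: str
    summary: str
    mappings: List[ControlMapping] = field(default_factory=list)

# A starter catalog: major tools plus common escalation processes.
CATALOG = [
    Control("AV-01",  "Endpoint Anti-Virus",        "technical"),
    Control("FW-01",  "Perimeter Firewall",         "technical"),
    Control("IDS-01", "Network IDS",                "technical"),
    Control("AWR-01", "Security Awareness Program", "process"),
    Control("HD-01",  "Help Desk Escalation",       "process"),
]

# The virus-on-an-endpoint scenario from above: the AV missed, but the
# user (awareness) and the help desk (escalation) caught it.
event = SecurityEvent("EVT-1042", "Virus infection on endpoint")
event.mappings.append(ControlMapping("AV-01", Outcome.MISSED,
                                     "signature out of date; no quarantine"))
event.mappings.append(ControlMapping("AWR-01", Outcome.WORKED,
                                     "user recognized infection symptoms"))
event.mappings.append(ControlMapping("HD-01", Outcome.WORKED,
                                     "help desk investigated and identified"))
```

Even a flat spreadsheet with those columns – event, control, outcome, note – captures the same information; the important habit is recording the miss alongside the catch.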

Over time, the catalog will begin to reveal those effective controls that consistently identify and escalate security issues.  It should also reveal those controls that are missing opportunities to prevent, detect or contain security threats.  This type of information is not only key to understanding control efficacy; it can also go a long way toward rationalizing investments and providing overall control assurance based on solid evidence.
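
Continuing the hypothetical sketch above, a simple tally of those per-event mappings is all it takes to surface the consistent performers and the consistent misses:

```python
from collections import Counter

def efficacy_report(events):
    """Tally worked vs. missed outcomes per control across many events."""
    worked, missed = Counter(), Counter()
    for ev in events:
        for m in ev.mappings:
            if m.outcome is Outcome.WORKED:
                worked[m.control_id] += 1
            elif m.outcome is Outcome.MISSED:
                missed[m.control_id] += 1

    for ctl in CATALOG:
        w, x = worked[ctl.control_id], missed[ctl.control_id]
        total = w + x
        rate = f"{w / total:.0%}" if total else "no data"
        print(f"{ctl.control_id}  {ctl.name}: worked={w} missed={x} ({rate})")

efficacy_report([event])   # in practice: every event mapped this quarter
```

Run over a quarter’s worth of events, a consistently high miss rate flags a control that needs tuning (or a bigger conversation), while a control that never appears at all may not have earned its place in the catalog.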


To see how Control Efficacy is incorporated into our SOC Readiness process in our Security Operations Management module, along with many other key SOC processes, take a look at our new practitioner guide.