ChrisHoover, Archer Employee

So far in this series I've talked about CM documentation and CM models. What's next?

Begin with the assumption that you are already using the NIST RMF (or possibly DIACAP, if in the military space) and build on what you're already doing. That means you already have defined roles and responsibilities for A&A/C&A processes. Start there: continuous monitoring is essentially the same process, performed more frequently. CM should also start with the tools you already have; implementing CM doesn't mean reinventing and replacing all your current processes and tools. The A&A stakeholders first need to get together, establish monitoring strategies, and agree on a model of measures and metrics.

Every system owner needs a monitoring strategy for every Information System they own, and every common control provider needs a monitoring strategy for the controls they offer. A monitoring strategy should 1) account for every control allocated to the system, 2) state whether each control is assessed manually or automatically, and 3) specify the frequency of each assessment.
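To make requirement 1 concrete, here is a minimal Python sketch of what a per-control monitoring-strategy record might look like. The class, field names, and frequencies are all hypothetical illustrations, not anything prescribed by NIST:

```python
from dataclasses import dataclass
from enum import Enum

class Method(Enum):
    AUTOMATED = "automated"
    MANUAL = "manual"

@dataclass
class ControlMonitoringEntry:
    control_id: str       # e.g., "CM-8" from the system's allocated baseline
    method: Method        # requirement 2: manual or automated assessment
    frequency_days: int   # requirement 3: how often the control is assessed

# A strategy is simply one entry per allocated control; two illustrative rows:
strategy = [
    ControlMonitoringEntry("CM-8", Method.AUTOMATED, 7),   # asset inventory scan, weekly
    ControlMonitoringEntry("AT-2", Method.MANUAL, 365),    # awareness training review, annual
]

# Requirement 1: the strategy must account for every allocated control.
allocated = {"CM-8", "AT-2", "SI-2"}
covered = {entry.control_id for entry in strategy}
missing = allocated - covered
print(f"Controls with no monitoring entry: {missing or 'none'}")  # -> {'SI-2'}
```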

Which should be automated? Any that you can, right? Sadly, only a small percentage of the controls in NIST SP 800-53 are conducive to automated assessment, and NIST has not made a declaration (or even a recommendation) about which ones they are. For now, your organization has to determine this itself. One good indicator: controls whose statements begin "The information system…" are usually better candidates for automated assessment than those that begin "The organization…". Some families are more automatable than others.
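That indicator is easy to mechanize as a first screen. Here is a toy Python sketch of the heuristic; the control statements are paraphrased from NIST SP 800-53, and this is a rough filter, not a real classifier:

```python
def automation_candidate(control_statement: str) -> bool:
    """Rough heuristic from the text above: statements beginning
    'The information system...' tend to be better automated-assessment
    candidates than those beginning 'The organization...'."""
    return control_statement.strip().lower().startswith("the information system")

# Paraphrased examples (AC-3 and AT-1 style statements):
print(automation_candidate(
    "The information system enforces approved authorizations for logical access."))  # True
print(automation_candidate(
    "The organization develops and documents a security awareness training policy.")) # False
```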

It also comes down to which tools you have in place. If you have a good configuration scanning tool and program in place, you can automatically assess a few controls in the CM family. If you have an asset management tool, you can automate CM-8. Automated patch management gets you SI-2. You get the idea. This will be a tedious process for your organization the first time through. It will differ slightly for each Information System and each organization, but even an organization with mature processes and technologies would be challenged to automate a couple dozen controls' worth of assessments.
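One way to make that inventory exercise less tedious is to write the tool-to-control mapping down as data. A small sketch follows, with a hypothetical mapping; your own tools, and the controls they can actually cover, will differ:

```python
# Hypothetical mapping of deployed tools to the SP 800-53 controls (or parts
# of controls) each can assess automatically.
tool_coverage = {
    "configuration scanner": ["CM-2", "CM-6"],  # baseline / configuration settings
    "asset management":      ["CM-8"],          # system component inventory
    "patch management":      ["SI-2"],          # flaw remediation
}

automatable = sorted({c for controls in tool_coverage.values() for c in controls})
print(f"Controls assessable automatically with current tools: {automatable}")
```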

The rest will of course be manual. AT, CP, MP, and PS are examples of control families with few if any automatable controls. Policy- and process-oriented controls, physical and personnel controls, and even many technical controls cannot be automatically assessed. This is the point where, if you're one of those organizations that brings in a third-party assessor every time ($$$), you may want to consider hiring your own internal assessors instead. You'll get your money's worth out of them.

Most of you know about SCAP by now. It's a group of specialized XML formats that enable automation and let security tools share scan data, analyses, and results. In the interest of brevity I won't say much about SCAP today, other than: use it where you can, and upgrade to SCAP-capable tools when you can. The common language provided by XML and SCAP means disparate tools and organizations can now share data where they couldn't before, including high-volume, frequent scan data across huge ranges of hosts. SCAP has reduced obstacles in the scanning and reporting workflow and has therefore increased the frequency with which some scans can be performed. The point of interjecting SCAP into this discussion is that automated assessments are streamlined by this technology, and automation is a significant factor in the frequency of assessments. Which leads to…
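For a feel of what SCAP data looks like in practice, here is a minimal sketch that parses a contrived fragment shaped like an XCCDF 1.2 TestResult (XCCDF being one of the SCAP component formats). Real scanner output carries far more detail (scores, targets, timestamps):

```python
import xml.etree.ElementTree as ET

# Contrived fragment in the shape of an XCCDF 1.2 TestResult; the rule IDs
# are invented examples.
SAMPLE = """\
<TestResult xmlns="http://checklists.nist.gov/xccdf/1.2">
  <rule-result idref="xccdf_org.example_rule_password-min-length">
    <result>pass</result>
  </rule-result>
  <rule-result idref="xccdf_org.example_rule_audit-logging-enabled">
    <result>fail</result>
  </rule-result>
</TestResult>
"""

NS = {"x": "http://checklists.nist.gov/xccdf/1.2"}
root = ET.fromstring(SAMPLE)
for rule_result in root.findall("x:rule-result", NS):
    outcome = rule_result.find("x:result", NS).text
    print(rule_result.get("idref"), "->", outcome)
```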

How do you determine the frequency for each control assessment? There are a few important factors (a sketch combining them follows below):

What is the criticality of the Information System? This can be derived from a BIA and/or from the Security Category assigned per FIPS 199 and NIST SP 800-60. A system with a higher criticality or Security Category should have its controls assessed more often; one with a lower criticality or Security Category, less often.

Is it automated or manual? Manual controls likely cannot be assessed as frequently as automated controls. This is simple logistics: a fixed number of employees can only perform so many manual assessments in an allotted time. Automated controls, despite the potential for much higher frequency, should only be assessed as often as is useful. An enterprise patch scan may be run daily, for example. Running two patch scans a day would take twice the effort, but may not be twice as useful.

How volatile is the control? A control that is known to change more often should be assessed more often. This means, for example, configuration checks should be assessed more often than a written policy, because the former changes much more often than the latter. (I am just finishing a white paper on the subject of CM monitoring strategies. It should be available in the next week or so. Email me if you'd like a copy when it is done.)
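Here is the sketch promised above, folding the three factors into an assessment interval. Every base interval and multiplier below is an invented placeholder; your organization's risk tolerance sets the real numbers:

```python
# Illustrative only: a toy scheduler turning the three factors into an interval.
BASE_DAYS = {"automated": 7, "manual": 180}                        # factor 2: method
CRITICALITY_FACTOR = {"high": 0.5, "moderate": 1.0, "low": 2.0}    # factor 1: criticality
VOLATILITY_FACTOR = {"volatile": 0.5, "stable": 2.0}               # factor 3: volatility

def assessment_interval_days(method: str, criticality: str, volatility: str) -> int:
    days = BASE_DAYS[method] * CRITICALITY_FACTOR[criticality] * VOLATILITY_FACTOR[volatility]
    return max(1, round(days))

# A configuration check on a high-criticality system, automated and volatile:
print(assessment_interval_days("automated", "high", "volatile"))  # -> 2 (days)
# A written policy on a low-criticality system, manual and stable:
print(assessment_interval_days("manual", "low", "stable"))        # -> 720 (days)
```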

So, you've figured out your monitoring strategy. Next comes implementation, then scoring and reporting, which you'll have to figure out on your own because they're specific to your environment and organization. But email me if you'd like to see a demo of Archer's CM solution; even if you don't decide to buy it on the spot, you may pick up some ideas by seeing how we've done it. For the scoring piece, look at iPost and the original CAESARS for clear, simple scoring models.
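To give a flavor of those scoring models, here is a minimal sketch in the additive style that iPost and CAESARS use: each open finding contributes points, per-host totals roll up to the enterprise, and higher means worse. The hosts, findings, and weights are fabricated for illustration (iPost derives its weights from sources such as CVSS):

```python
# Invented example data: open findings per host, each with a point weight.
findings = {
    "web-server-01":  [("missing patch", 6.0), ("weak TLS config", 4.3)],
    "db-server-01":   [("missing patch", 9.8)],
    "workstation-17": [],
}

host_scores = {host: sum(weight for _, weight in items) for host, items in findings.items()}
for host, score in sorted(host_scores.items(), key=lambda kv: -kv[1]):
    print(f"{host:15s} {score:5.1f}")
print(f"enterprise total: {sum(host_scores.values()):.1f}")
```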

Lastly, how do you know when you're done? You'll probably never be done, right? But how do you know when you're adequate? To answer this, I will close with this cool little yardstick (with thanks to Peter Mell of NIST). This scale gives us all a lot to aspire to.

Level 0: Manual Assessment – Security assessments lack automated solutions

Level 1: Automated Scanning

o    Decentralized use of automated scanning tools

o    Either provided centrally or acquired per system

o    Reports generated independently for each system

Level 2: Standardized Measurement

o    Reports generated independently for each system

o    Enable use of standardized content (e.g., USGCB/FDCC, CVE, CCE)

Level 3: Continuous Monitoring

o    Federated control of automated scanning tools

o    Diverse security measurements aggregated into risk scores

o    Requires standard measurement system, metrics, and enumerations

o    Comparative risk scoring is provided to enterprise (e.g., through dashboards)

o    Remediation is motivated and tracked by distribution of risk scores

Level 4: Adaptable Continuous Monitoring

o    Enable plug-and-play CM components (e.g., using standard interfaces)

o    Result formats are standardized

o    Centrally initiated ad-hoc automated querying throughout enterprise on diverse devices (e.g., for the latest US-CERT alert)

Level 5: Continuous Management

o    Risk remedy capabilities added (both mitigation and remediation)

o    Centrally initiated ad-hoc automated remediation throughout enterprise on diverse devices (with review and approval of individual operating units)

o    Requires adoption of standards-based remediation languages, policy devices, and validated tools

Thanks for tuning in for this 3-part series on continuous monitoring! As always, any questions or comments, email me.

Chris