Continuing our #ExpectMore series, I want to explore two commonly used terms and what they might look like during a penetration test in your environment: “visibility” and “actionable metrics.” These terms are used frequently in the cybersecurity world by vendors and their clients. I want to define them, share our perspective on their importance, and then (hopefully) shed some light on how to achieve both in your own organization. Spoiler alert: no product or vendor completely eliminates the need for your team to do some of the lifting here, but certain partners may provide more benefit than others if you know what to look for.
When our team of penetration testers is working, they document the vulnerabilities they find and the exploits they attempt. At the conclusion of their efforts, they present a list of what was attempted and ask for any artifacts (logs, alerts, screenshots) generated by the tools and techniques they executed within your environment. Below are some examples, with results, from a few engagements.

The first example shows the team exploiting a missing patch and then moving through the environment using different techniques. Step 1, scanning, can be detected by network security overlays such as firewalls and IDS/IPS, which may be monitored internally or by a managed service provider (MSP). Step 3, exploitation, is detected by a host-based agent such as Windows Defender, CrowdStrike, or a similar product. When those products detect and stop an attack, the test team works toward obtaining a credential that can be used to disable the protective software. In both cases, detection of either the payload or the disabling of the software happens in less than 10% of our engagements. Step 6, account creation, is the most frequently detected item in our chain of events, and auditing for it is built into most modern MS-Windows servers.
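To make the account-creation step concrete, here is a minimal sketch of the kind of check a monitoring pipeline performs: flagging Windows Security log entries with Event ID 4720 (“a user account was created”). The pipe-delimited field layout and sample entries below are illustrative only; in practice this data would come from your SIEM or the Windows Event Log itself.

```python
# Sketch: flag Windows Security log entries for Event ID 4720
# ("a user account was created"). The log format here is illustrative;
# real collection would go through a SIEM or the Windows Event Log API.

def find_account_creations(log_lines):
    """Return (timestamp, account) pairs for Event ID 4720 entries."""
    hits = []
    for line in log_lines:
        fields = dict(
            part.split("=", 1) for part in line.split("|") if "=" in part
        )
        if fields.get("EventID") == "4720":
            hits.append((fields.get("Time"), fields.get("TargetUserName")))
    return hits

sample = [
    "Time=2024-05-01T10:02:11|EventID=4624|TargetUserName=jsmith",
    "Time=2024-05-01T10:05:43|EventID=4720|TargetUserName=svc_backup2",
]
print(find_account_creations(sample))
```

The point is not the parsing itself but the follow-through: detecting the event is only useful if it is routed to someone who can act on it.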
The next example shows the team exploiting weak protocols in use. For the past decade, this has been the dominant technique for capturing a user credential for password cracking. Step 1 can be detected by monitoring your environment for protocol poisoning and alerting on hosts that attempt these attacks. Our team has seen this detected only once in all of our testing; it was caught by the organization’s MSP, and an email alert was sent to the IT department immediately.
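The detection logic behind protocol-poisoning alerts can be sketched simply: a host that “answers” LLMNR/NBT-NS queries for machine names that do not exist in your inventory is almost certainly a poisoner, since tools of this class respond to any broadcast name lookup. The inventory and observed responses below are made-up illustration data, not output from any real tool.

```python
# Sketch of LLMNR/NBT-NS poisoning detection logic: flag any host that
# responds to name queries for machines that are not in the inventory.
# known_hosts and observed_responses are illustrative, hypothetical data.

def find_poisoners(known_hosts, observed_responses):
    """observed_responses: iterable of (responder_ip, queried_name)."""
    known = {name.lower() for name in known_hosts}
    return sorted({
        ip for ip, name in observed_responses
        if name.lower() not in known
    })

inventory = ["fileserver01", "dc01", "printsrv"]
responses = [
    ("10.0.0.5", "fileserver01"),       # plausible, legitimate answer
    ("10.0.0.99", "wpad"),              # classic poisoning target name
    ("10.0.0.99", "fileservr01-typo"),  # answers for a nonexistent host
]
print(find_poisoners(inventory, responses))
```

Some defensive tools apply this same idea actively, by broadcasting queries for deliberately fake hostnames and alerting on anything that replies.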

Most organizations have the tools and services necessary to detect and respond to our scanning, poisoning, exploitation, and account creation techniques, but in most of our testing the detections do not turn into actionable alerts. Being able to detect these events within your environment is a critical security component, and it’s why, when we are successful, we list the steps taken to gain access to hosts, services, and data.
With visibility covered, let’s move on to actionable metrics. Reports typically include a large amount of verbiage and data, which can be overwhelming. We previously discussed our C/H score and Security Debt metrics in another blog post. In that post, Kevin also discussed vulnerability and remediation categories.
For the technician, knowing which IP address needs a patch or a configuration change is an essential part of their security-related activities. As we move up the chain of command, where operational, financial, and partnership decisions are made, the vulnerability categories become more important. The primary reason is that addressing root causes moves your organization from day-to-day firefighting to lasting remediation, which should make it more efficient. That efficiency comes from being able to stop chasing fires and get back to serving your customers in whatever industry you find yourself. To quote from the previously referenced blog:
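The shift from per-device fixes to root causes is essentially a roll-up: the same findings a technician works from, grouped by category for leadership. The sketch below uses invented findings and category names purely to illustrate that roll-up; it is not output from our tooling.

```python
# Illustrative roll-up: individual findings (technician view) grouped
# by root-cause category (leadership view). All data here is made up.
from collections import Counter

findings = [
    {"ip": "10.0.0.12", "issue": "SMB signing disabled", "category": "Misconfiguration"},
    {"ip": "10.0.0.12", "issue": "LLMNR enabled",        "category": "Misconfiguration"},
    {"ip": "10.0.0.31", "issue": "Missing OS patch",     "category": "Missing Patch"},
    {"ip": "10.0.0.44", "issue": "Default credentials",  "category": "Misconfiguration"},
]

by_category = Counter(f["category"] for f in findings)
for category, count in by_category.most_common():
    print(f"{category}: {count}")
```

A technician patches 10.0.0.31; leadership sees that misconfigurations dominate and funds a hardening standard instead of three more one-off tickets.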

As an example, the previous graphic states that the first step the team used to gain privileged access was the exploitation of weak protocols in use. It may be tempting, and necessary, to deploy someone to fix the issue immediately on the exploited device; however, the larger issue is that devices can be, and evidently are, deployed with an insecure configuration. The graphic below is from our Perigon360 application and shows that misconfigurations were the primary source of vulnerabilities discovered in the scanned environment.

The following graphic, naturally, shows that configuration management would remediate or mitigate the most vulnerabilities in the tested environment. Developing, and enforcing, a policy and procedure that deploys only tested and hardened operating systems, software, firmware, etc. is what would deliver the most impact.

In building your program, the ability to detect protocol poisoning is the reactive measure; it allows you or your staff to detect and respond to the attack. The proactive step is to develop and deploy hardened devices that would not be vulnerable to the attack in the first place. The administrative aspect is to have the necessary policies and procedures to deploy the hardened images, along with the proper budget(s) in time, materials, and training to recognize the current threats and attack vectors facing your organization. Going back to Security Debt from the last blog, we can see below how old some of these misconfigurations are.
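The age view of Security Debt can be sketched as a simple calculation over first-seen dates: how long each finding has been left open. The findings and dates below are hypothetical; the actual Security Debt metric is the one defined in the blog post referenced above.

```python
# Hedged sketch of a "security debt" age view: days each finding has
# been open, from an assumed first-seen date. Data is illustrative.
from datetime import date

def debt_ages(findings, as_of):
    """Return (issue, days_open) pairs for each open finding."""
    return [(f["issue"], (as_of - f["first_seen"]).days) for f in findings]

open_findings = [
    {"issue": "LLMNR enabled",       "first_seen": date(2021, 3, 1)},
    {"issue": "SMBv1 still enabled", "first_seen": date(2019, 7, 15)},
]
for issue, age in debt_ages(open_findings, as_of=date(2024, 3, 1)):
    print(f"{issue}: open {age} days")
```

Seeing a weak-protocol misconfiguration measured in years rather than days is often what moves it from a ticket queue into a budget conversation.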

All of these metrics provide the context needed when having your environment assessed, whether by your own team or an outside vendor. Knowing the contextual factors regarding your security posture will ensure the most effective use of your effort and dollars. At Contextual Security Solutions, we want you to #ExpectMore so you can make the best decisions for your organization.

