Vulnerability Management – Scanning

This is the second post in the Vulnerability Management – A Practical Approach series. I have highlighted before that not every suggestion here fits every possible situation. However, fear not! We can tweak and adapt these recommendations to cater to your needs.

Oops, shouldn’t we take a moment to chat about what goes on during a scanning activity before we dive headfirst into the nitty-gritty of the configurations? Let’s not get ahead of ourselves now, shall we?

The scanning activity starts with the discovery phase, during which the scanning device tries to determine whether the asset is online. Once it has been ascertained that the asset is online, port scanning is initiated. In scenarios where we are not sure that ICMP ping-based discovery can accurately identify assets, we include port scanning as a discovery method as well. During port discovery, if a port is found to be responding, the scanning device interacts with it and maps the response to a signature in the platform’s service database. Once service discovery is performed, further querying identifies the associated version. The scanning tool also estimates the installed operating system based on open ports, TTL values, TCP sequence behaviour, and so on. If an open port supports authentication (SSH, SMB, etc.) and the scanning device has a set of valid credentials, it attempts to authenticate. If authentication succeeds, it enumerates installed packages, hotfixes, and patches, and reads configuration files to build a profile of the host. The collected configuration is then compared against the vulnerability database maintained by the platform to produce a vulnerability report for the targeted host. This explanation is a simplified one, meant to establish common ground.
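To make those phases concrete, here is a minimal Python sketch of the discovery → port scan → service mapping → authenticated enumeration flow. Everything in it (the `HostProfile` fields, the tiny `SERVICE_SIGNATURES` table, the injected `probe_port`/`try_auth` callables) is hypothetical scaffolding for illustration, not any vendor’s actual pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class HostProfile:
    """Hypothetical per-host result object built up phase by phase."""
    address: str
    alive: bool = False
    open_ports: dict = field(default_factory=dict)   # port -> service guess
    authenticated: bool = False
    packages: list = field(default_factory=list)

# Toy stand-in for a platform's service-signature database.
SERVICE_SIGNATURES = {22: "ssh", 80: "http", 443: "https", 445: "smb"}

def scan_host(address, probe_port, try_auth=None):
    """Walk the phases for one host. `probe_port(addr, port)` and
    `try_auth(addr)` are injected so the flow stays transport-agnostic."""
    profile = HostProfile(address)
    # Discovery: here, "alive" simply means at least one probed port responds.
    responding = [p for p in SERVICE_SIGNATURES if probe_port(address, p)]
    profile.alive = bool(responding)
    if not profile.alive:
        return profile
    # Service mapping: match responding ports against the signature table.
    profile.open_ports = {p: SERVICE_SIGNATURES.get(p, "unknown") for p in responding}
    # Authenticated phase runs only when an auth-capable port (e.g. SSH) is open
    # and credentials were supplied.
    if try_auth and 22 in profile.open_ports:
        profile.authenticated, profile.packages = try_auth(address)
    return profile
```

Real platforms replace the injected callables with ICMP/TCP probes and SSH/SMB sessions, and their signature databases run to thousands of entries.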

Picture this: once we have crafted a nifty inventory tool with all the right attributes, we can gather the relevant stakeholders and laugh while devising scanning strategies. But here’s the catch – before we embark on our scanning adventure, we need to gather multiple parameters and possibly strike a deal with the boss. So, buckle up and let’s tackle one parameter at a time, appreciating the sheer importance of each in our quest for a foolproof strategy.

  • Scanning Name
    • Adopting a naming convention for the scan significantly contributes to organizing the scan structure. We may need to swiftly retrieve specific details for an environment in numerous scenarios. Embracing a well-defined naming convention will facilitate narrowing down the associated scans. Can we have a naming convention that perfectly fits every situation? Unfortunately, that’s not possible. However, let’s delve into one of the options below.
    • Frequency – Business Unit – Network Context – Region – Scanning Environment – Scan Type – Additional Parameters
      • Daily-HR-Internal-AMER-Auth-Prod
      • Weekly-IT-External-Global-Remote-WebApps
      • OnDemand-IT-Internal-APJ-Remote-Staging
      • Monthly-IT-Internal-Global-Remote-ADServers
    • The above convention helps us organize the assets by scanning cadence, business unit, region, and network context. If we are asked to verify when all Active Directory servers are scanned, we can simply look up the scans ending in ADServers and share the schedule.
    • We can append a serial number if the number of targets is large or there is another reason to split a similar scan into multiple scans.
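If the scanning platform is driven through an API or an automation pipeline, the convention can be enforced in code rather than by habit. A small sketch follows; the function name and the reading of the field order are my own, so adjust to match your template:

```python
def build_scan_name(frequency, business_unit, network_context, region,
                    environment, scan_type, *extra):
    """Join the convention's fields with '-'. Empty fields are dropped, and
    any extra qualifiers (e.g. a serial number for split scans) are appended."""
    parts = [frequency, business_unit, network_context, region,
             environment, scan_type, *extra]
    return "-".join(str(p) for p in parts if p)
```

For example, `build_scan_name("Monthly", "IT", "Internal", "Global", "Remote", "ADServers")` reproduces the `Monthly-IT-Internal-Global-Remote-ADServers` entry above, and passing `"01"` as an extra argument appends the split-scan serial.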
  • Target List
    • We understand the importance of collaboration with the business owner to develop an automation pipeline that can efficiently fetch the necessary assets for each scheduled scan. It would be helpful to confirm if utilizing a specific Business Unit tag from the inventory tool would be adequate for this purpose. Moreover, gaining a clear understanding of network segregation is crucial. We need to ascertain whether the available scanners can reach the assets or if deploying scanners in each network zone would be necessary. While implementing an ACL to allow traffic through the firewall might be a viable solution, we must consult the network team and provide them with an approximate estimation of the scanning traffic and bandwidth utilization.
    • If the scanning platform supports organizing targets in a group or with a tag, the same can be used as the target in a scan.
    • Typically, targets are accepted in different IP address notations and FQDN formats, for example: 10.0.0.0/24, 10.10.10.1, 10.10.10.0-10.10.10.255, utkarshutsava.in, etc.
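Normalizing those mixed notations is a common chore in target-list pipelines. Here is a sketch using Python’s standard `ipaddress` module; the `expand_target` name is mine, and real platforms usually accept these formats natively:

```python
import ipaddress

def expand_target(target):
    """Expand one target entry into concrete addresses.
    CIDR blocks and dashed ranges become IP lists; anything else
    (single IPs, FQDNs) passes through as-is."""
    target = target.strip()
    try:
        if "/" in target:
            # CIDR notation, e.g. 10.0.0.0/24 -> usable host addresses
            return [str(ip) for ip in ipaddress.ip_network(target, strict=False).hosts()]
        if "-" in target:
            # Dashed range, e.g. 10.10.10.0-10.10.10.255
            start, end = (ipaddress.ip_address(p.strip()) for p in target.split("-"))
            return [str(ipaddress.ip_address(i)) for i in range(int(start), int(end) + 1)]
        return [str(ipaddress.ip_address(target))]
    except ValueError:
        return [target]  # not an IP form: assume an FQDN such as utkarshutsava.in
```

Note the FQDN fallback also catches hostnames containing dashes, since parsing them as an IP range raises `ValueError`.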
  • Scanner
    • The scanner is like an undercover agent, sneaking around and performing top-secret scanning operations. It’s like a spy on a mission to enumerate and assess all the assets in its path. And just like James Bond, it has different tools and gadgets. One of its favourite tricks is the “selected product and license” approach, like choosing the perfect weapon for the job. So, let the Scanner do its thing and uncover all those hidden treasures!
      • Remote Scanners/Scanner – A Remote Scanner, or just Scanner, because who doesn’t love fancy names, is like the superhero of the scanning world. It sits in a scanning environment, ensuring it has a working network communication path to its targets, and triggers enumeration and assessment jobs against them from a distance. These scanners are the masters of remote authentication, using SSH, SMB, HTTPS, and other techniques to get the inside scoop on an asset, often with root/administrative privileges; talk about power! If authentication credentials are attached to a scan and the target accepts them, boom! The scanner collects all the juicy details like installed patches, packages, and configurations; it’s all about creating that perfect asset profile. Oh, and don’t forget the importance of setting up an account for authentication. Some organizations still live in the password-rotating era (hey, we all have our quirks), so we might need to explore the wonders of password vaults. The service account must also be treated like a VIP and granted the privileges needed to support configuration and package enumeration; after all, we don’t want any asset-profiling mishaps! So there you have it, the glamorous world of Remote Scanners: the James Bond of asset assessment, sneaking around, gathering information, and creating asset profiles like nobody’s business.
      • Agent – Oh, hey there, Agent! You’re like the undercover spy of applications, deployed on the target itself. Your job? Collect information locally and create a fancy profile for the host. But here’s the catch – you’ve got a bit of tunnel vision: because you only see the host you live on, you can miss the bigger network picture. Fear not, though, dear Agent, you’ve got two modes of operation. In the first, you’re a pro at continuous monitoring, scanning and sending back any changes at regular intervals or whenever you’re asked nicely. The second mode? You’re like a clock, ticking away until it’s time to scan and report on all the exciting updates at a fixed period. Keep up the good work, Agent!
        • Scanning through an agent is like navigating a maze but without the hassle of network or credential issues. However, if the agent settings haven’t been properly tweaked, brace yourself for a wild ride of skyrocketing processor and memory utilization. It’s like witnessing a circus act gone wrong!
      • Agentless – So here’s the deal: in this snazzy technique called “Agentless,” the platform gets a little help from its friends, the APIs. These APIs, like the ones from hosting platforms such as AWS or Azure, work their magic to create an asset profile. They roll up their sleeves and start inventorying the installed OS, applications, packages, and all that good stuff. Then they go head-to-head with the security advisories, compare the results, and voilà! A vulnerability report highlights any pesky software that’s been causing security nightmares. It’s like CSI for cybersecurity, but without the handcuffs and detective hat!
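Whichever deployment model we pick, the authenticated phase boils down to running OS-appropriate enumeration commands. A hedged sketch of that idea follows; the mapping below is a hypothetical, deliberately tiny subset, since real scanners ship far richer, versioned collection logic per platform:

```python
# Hypothetical mapping of OS family to the enumeration commands a remote
# scanner or agent might run after authenticating. Illustrative only.
ENUMERATION_COMMANDS = {
    "debian":  ["dpkg-query -W -f='${Package} ${Version}\\n'", "cat /etc/os-release"],
    "redhat":  ["rpm -qa", "cat /etc/os-release"],
    "windows": ["wmic qfe list brief", "wmic product get name,version"],
}

def enumeration_plan(os_family):
    """Return the command list for a detected OS family, or an empty plan
    (unauthenticated assessment only) when the family is unknown."""
    return ENUMERATION_COMMANDS.get(os_family.lower(), [])
```

The service account running these commands needs read access to the package database and configuration files, which is exactly why the privilege discussion above matters.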
  • Schedule
    • Alrighty, so here’s the dealio – we’ve got two ways to launch these scans: Adhoc/OnDemand or Schedule mode. Now, if we’re just looking for a one-time thing, we’ll go with the on-demand mode and skip the whole scheduling business. But, if we’re in the mood for some repeated launches, then it’s time to activate that schedule! And what does this schedule entail, you may ask? Well, buckle up, ’cause it usually includes the following configurations:
    • Frequency – How often should the scan launch? Again, there is no single correct answer here. It depends on the organization’s risk appetite, which governs the remediation timelines. If the remediation process is not very aggressive, it doesn’t make sense to keep performing full-scope scans at shorter intervals. Each organization should carefully assess its own situation to determine the most appropriate frequency for its scan launches.
      • What if we’re stuck waiting for a month or a quarter just to complete a pesky scan and keep an eye out for sneaky threats? Well, fear not, because automation swoops in to save the day! We can whip up some fancy automation workflow magic that allows us to launch one-time scans for those potential vulnerable assets, all while honing in on those elusive emerging threats. Pretty nifty, right? So long, endless waiting!
    • Duration – This flexibility is subject to the availability of the feature on the platform. Is it important? In my opinion, the ability to fine-tune the running hours of a scan can greatly enhance scanning efficiency. Consider a scan that targets laptops and desktops using remote scanners: if it launches in the late evening when users start logging out, assets that were missed cannot be rescanned, and when the scanner tries to discover them later they will simply report a “not live” status. However, if we can set the running hours, we can specify them as 8 AM – 8 PM; after that, the scan can be paused or cancelled according to the configured schedule. If the schedule is daily, we cancel it and launch a new instance the next day, but if the schedule is weekly or longer, we can simply pause the scan and resume it the following day.
    • Priority – This can be optional in most cases, but in scenarios where the same scanners serve multiple environments and a time-sensitive request comes in, we may want a particular scan processed quickly. In such cases, we can assign the scan the highest priority, which speeds up its processing by deprioritizing the scanner’s other tasks.
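The running-hours logic described under Duration is simple enough to sketch. Below is a minimal, hypothetical decision helper (the `scan_action` name, return values, and 8 AM–8 PM defaults are mine, mirroring the example above, not any platform’s API):

```python
from datetime import time

def scan_action(now, window_start=time(8, 0), window_end=time(20, 0), cadence="weekly"):
    """Decide what to do with a running scan at a given wall-clock time.
    Inside the window we keep running; outside it, daily scans are cancelled
    (a fresh instance launches tomorrow) while longer cadences pause and
    resume the next day."""
    if window_start <= now < window_end:
        return "run"
    return "cancel" if cadence == "daily" else "pause"
```

A scheduler loop would call this every few minutes and invoke the platform’s pause/cancel API accordingly.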
  • Scan Policy
    • Scan Policy, the mighty magician of the scanning process, holds the key to its kingdom. This mischievous policy object orchestrates a grand performance, covering everything from enumeration to assessment to reporting. And let’s not forget the extra funny business it brings along – mandatory settings, optional settings, and a whole bunch of other tricks up its sleeve. It knows the importance of Credentials and Parallel Scan Processing, but it doesn’t stop there. Oh no, this jester likes to keep things interesting with a delightful array of surprises.
    • Enumeration
      • What should be the discovery process? Just ICMP pings or ARP Scanning? Should we scan for ports to identify if hosts are alive? If so, which ports? All ports? Standard Ports? Custom Ports?
        • Based on the network design, if we’re expecting accurate ICMP pings, we can keep them as one of the discovery parameters. But if we’re not confident in their accuracy, well, TCP port scanning it is! We can follow the list of super popular ports that usually hang around on assets for communication purposes. You know, the cool kids like 22/SSH, 25/SMTP, 80/HTTP, 443/HTTPS, 445/SMB, 3389/RDP, 2222/SSH, 8080-8082/HTTP, 8000/HTTP, 5000/HTTP, 1433/MSSQL, 3306/MySQL, 5432/PostgreSQL, 6379/Redis, and so on. Can’t leave those ports feeling lonely now, can we? 😉
    • Assessment
      • These configuration settings are like dials that let us tweak authentication, service-discovery port lists, and detection coverage. It’s like having the power to choose which vulnerabilities to scan for; for example, if we’re only scanning Windows laptops and know we won’t have credentials, we can narrow detection down accordingly. And let’s face it, when the schedule calls for daily full scans, a full port scan every time isn’t always feasible for complex assets, so it’s good practice to run one at least once every month or two. Just remember not to accidentally overwrite all that juicy vulnerability information with conflicting configurations. We should also consider whether to run with debug logs enabled to support troubleshooting and false-positive analysis.
    • Reporting – Basic configuration focused on what to present and in which format. Do we need to see dead hosts? Do we need to see superseded patches, etc.?
    • Additional Configurations
      • Credentials – Relevant authentication credentials that can be leveraged to establish a remote authenticated session between scanner and target host for more accurate profiling.
      • Parallel Processing – Ah, the joys of swift outcomes! Those scanning tools sure know how to process tasks simultaneously, giving us that rush of speed. But hey, remember, speed thrills, but it can also be a killer. So, let’s not go overboard and overload those targets with an excessive number of concurrent connections and an avalanche of command execution. We must always fine-tune the default settings, taking into account the unique quirks of each scanning environment. After all, not all environments respond in the same way to a little scanning activity. Happy scanning, my friend!
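The throttling that the Parallel Processing bullet warns about can be sketched in a few lines: a plain TCP connect probe over the popular discovery ports listed earlier, fanned out through a thread pool whose `max_workers` bounds how many simultaneous connections the target sees. This is a hedged illustration under my own function names, not how any particular product implements it; real scanners add SYN scanning, retries, and per-host rate limits:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Illustrative subset of the commonly probed ports from the discovery section.
DISCOVERY_PORTS = [22, 25, 80, 443, 445, 3389, 2222, 8080, 8000,
                   5000, 1433, 3306, 5432, 6379]

def tcp_probe(host, port, timeout=0.5):
    """TCP connect probe: True if the port accepts a connection;
    closed or filtered ports raise OSError and report False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover_open_ports(host, ports=DISCOVERY_PORTS, max_workers=8):
    """Probe ports concurrently. max_workers caps the parallelism so we
    don't flood the target with an avalanche of simultaneous connections."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(lambda p: (p, tcp_probe(host, p)), ports)
    return [port for port, is_open in results if is_open]
```

Tuning `max_workers` (and the per-probe timeout) per environment is the code-level equivalent of the fine-tuning advice above: a hardened data-center segment tolerates far more concurrency than a fragile OT network.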

Vulnerability Scanning Platforms: Tenable, Qualys, Rapid7, Nmap

That’s a wrap, folks! Hopefully, this little nugget of wisdom has been useful. Buckle up, because in the next post we’ll be diving headfirst into the enchanting world of vulnerability management reporting. Grab your popcorn and strap in tight 🙂
