
Pentest Documentation & Reporting

Preparation

Notetaking & Organization

Notetaking Sample Structure

There is no universal solution or structure for notetaking, as each project and tester is different. The structure below can be helpful but should be adapted to your personal workflow, the project type, and the specific circumstances you encounter during your project. For example, some of these categories may not be applicable to an application-focused assessment, which may even warrant additional categories not listed here.

  • Attack-Path - An outline of the entire path if you gain a foothold during an external pentest or compromise one or more hosts during an internal pentest. Outlining the path as closely as possible using screenshots and command output will make it easier to paste into the report later, leaving only formatting to worry about.
  • Credentials - A centralized place to keep your compromised credentials and secrets as you go along.
  • Findings - It’s recommended to create a subfolder for each finding, then write your narrative and save it in that folder along with any evidence. It is also worth keeping a section in your notetaking tool for recording findings information to help organize them for the report.
  • Vulnerability Scan Research - A section to take notes on things you’ve researched and tried with your vulnerability scans.
  • Service Enumeration Research - A section to take notes on which services you’ve investigated, failed exploitation attempts, promising vulns/misconfigs, etc.
  • Web Application Research - A section to note down interesting web applications found through various methods, such as subdomain brute-forcing. It’s always good to perform thorough subdomain enumeration externally, scan for common web ports on internal assessments, and run a tool such as Aquatone or EyeWitness to screenshot all applications. As you review the screenshot report, note down applications of interest, common/default credential pairs you tried, etc.
  • OSINT - A section to keep track of interesting information you’ve collected via OSINT, if applicable to the engagement.
  • Administrative Information - Some people may find it helpful to have a centralized location to store contact information for other project stakeholders like Project Managers or client Points of Contact, unique objectives/flags defined in the Rules of Engagement, and other items that you find yourself often referencing throughout the project. It can also be used as a running to-do list: as ideas pop up for tests you need to perform or want to try but don’t have time for, be diligent about writing them down here so you can come back to them later.
  • Scoping Information - Here, you can store information about in-scope IP addresses/CIDR ranges, web application URLs, and any credentials for web applications, VPN, or AD provided by the client. It could also include anything else pertinent to the scope of the assessment so you don’t have to keep re-opening scope information and ensure that you don’t stray from the scope of the assessment.
  • Activity Log - High-level tracking of everything you did during the assessment for possible event correlation.
  • Payload Log - Similar to the activity log, tracking the payloads you’re using in a client environment is critical.
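For both logs, even a small shell helper keeps entries consistent and timestamped. The sketch below is a suggestion, not a prescribed standard; the activity.log filename and the entry layout are assumptions to adapt to your own workflow:

```shell
# Hypothetical helper: appends a UTC-timestamped entry to activity.log.
# Both the filename and the entry format are assumptions; adapt as needed.
log_activity() {
    printf '%s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$*" >> activity.log
}

log_activity "nmap -sV 10.10.10.5 (service scan of in-scope host)"
```

Calling the helper before each significant action gives you a greppable timeline that can be handed to the client for event correlation.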

Notetaking Tools

There are many tools available for notetaking, and the choice is very much personal preference. Here are some of the options available:

  • CherryTree
  • Visual Studio Code
  • Evernote
  • Notion
  • GitBook
  • Sublime Text
  • Notepad++
  • OneNote
  • Outline
  • Obsidian
  • Cryptpad
  • Standard Notes

Logging

It is essential that you log all scanning and attack attempts and keep raw tool output wherever possible. This will greatly help you come reporting time. Though your notes should be clear and extensive, you may miss something, and having your logs to fall back on can help when adding more evidence to a report or responding to a client question.
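One low-friction way to keep raw output, assuming a Linux shell, is to pipe each tool through tee into a timestamped file so you still see results live. The logs directory name and the echo stand-in below are illustrative only:

```shell
# Keep raw tool output in a timestamped log while still watching it live.
# "logs" and the echo stand-in are illustrative; substitute your real tool.
mkdir -p logs
ts=$(date -u +%Y%m%d-%H%M%S)
echo "example tool output" | tee "logs/example-$ts.log"
```

The same pattern works for nearly any command-line tool, and many tools (Nmap's -oA, for example) also have native output options worth using in addition.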

Exploitation Attempts

Tmux logging is an excellent choice for terminal logging, and you should absolutely be using Tmux with logging enabled, as this will save everything that appears in a Tmux pane to a log file. It is also essential to keep track of exploitation attempts in case the client needs to correlate events later on. It is supremely embarrassing if you cannot produce this information, and it can make you look inexperienced and unprofessional as a pentester. It can also be a good practice to keep track of things you tried during the assessment that did not work. This is especially useful for those instances in which you have little to no findings in your report. In this case, you can write up a narrative of the types of testing performed, so the reader can understand the kinds of things they are adequately protected against. You can set up Tmux logging on your system as follows:

First, clone the Tmux Plugin Manager repo to your home dir.

d41y@htb[/htb]$ git clone https://github.com/tmux-plugins/tpm ~/.tmux/plugins/tpm

Next, create a .tmux.conf file in the home directory.

d41y@htb[/htb]$ touch .tmux.conf

The config file should have the following contents:

d41y@htb[/htb]$ cat .tmux.conf 

# List of plugins

set -g @plugin 'tmux-plugins/tpm'
set -g @plugin 'tmux-plugins/tmux-sensible'
set -g @plugin 'tmux-plugins/tmux-logging'

# Initialize TMUX plugin manager (keep at bottom)
run '~/.tmux/plugins/tpm/tpm'

After creating this config file, you need to load it into your current session so the settings in the .tmux.conf file take effect. You can do this with the tmux source command.

d41y@htb[/htb]$ tmux source ~/.tmux.conf 

Next, you can start a new Tmux session.

Once in the session, type [CTRL] + [B] and then hit [Shift] + [I], and the plugin will install.

Once the plugin is installed, start logging the current session by typing [CTRL] + [B] followed by [CTRL] + [P]. If all went as planned, the bottom of the window will show that logging is enabled along with the path of the output file. To stop logging, repeat the key combo, or type exit to kill the session. Note that the log file will only be populated once you either stop logging or exit the Tmux session.

If you forget to enable Tmux logging and are deep into a project, you can perform retroactive logging by typing [CTRL] + [B] and then hitting [Alt] + [Shift] + [P], and the entire pane will be saved. The amount of saved data depends on the Tmux history-limit, or the number of lines kept in the Tmux scrollback buffer. If this is left at the default value and you try to perform retroactive logging, you will most likely lose data from earlier in the assessment. To safeguard against this situation, you can add the following line to the .tmux.conf file:

set -g history-limit 50000

Another handy trick is the ability to take a screen capture of the current Tmux window or an individual pane. Say you are working with a split window, one pane running Responder and another running ntlmrelayx.py. If you attempt to copy/paste the output from one pane, you will grab data from the other pane along with it, which will look messy and require cleanup. You can avoid this by taking a screen capture as follows: [CTRL] + [B] followed by [Alt] + [P].

There are many other things you can do with Tmux, including further customizations to Tmux logging. It is worth reading up on all the capabilities that Tmux offers and finding out how the tool best fits your workflow. Finally, here are some additional plugins that you might like:

  • tmux-sessionist - Gives you the ability to manipulate Tmux sessions from within a session: switching to another session, creating a new named session, killing a session without detaching Tmux, promoting the current pane to a new session, and more.
  • tmux-pain-control - A plugin for controlling panes and providing more intuitive key bindings for moving around, resizing, and splitting panes.
  • tmux-resurrect - This extremely handy plugin allows you to restore your Tmux environment after your host restarts. Some features include restoring all sessions, windows, panes, and their order, restoring running programs in a pane, restoring Vim sessions, and more.

Artifacts Left Behind

At a minimum, you should be tracking when a payload was used, which host it was used on, what file path it was placed in on the target, and whether it was cleaned up or needs to be cleaned by the client. A file hash is also recommended for ease of searching on the client’s part. It’s best practice to provide this information even if you delete any web shells, payloads, or tools.
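A lightweight way to capture the file hash is to record it, with a timestamp, in a running payload log before deploying the file. The filenames below are stand-ins for illustration:

```shell
# Record a UTC timestamp and the SHA-256 hash of a payload before deploying
# it, so the client can later search their environment for the exact file.
# shell.aspx and payload.log are hypothetical names.
printf 'example payload contents' > shell.aspx
{
    date -u +%Y-%m-%dT%H:%M:%SZ
    sha256sum shell.aspx
} >> payload.log
```

Noting the host and target file path alongside each hash in the same log covers the minimum tracking described above.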

Account Creation / System Modifications

If you create accounts or modify system settings, you must track those changes in case they cannot be reverted once the assessment is complete. Some examples include:

  • IP address of the host(s)/hostname(s) where the change was made
  • Timestamp of the change
  • Location on the host(s) where the change was made
  • Name of the application or service that was tampered with
  • Name of the account and perhaps the password in case you are required to surrender it

It should go without saying, but as a professional, and to prevent making enemies of the infrastructure team, you should get written approval from the client before making these types of system modifications or doing any sort of testing that might cause an issue with system stability or availability. This can typically be ironed out during the project kickoff call by determining the threshold of activity the client is willing to tolerate without being notified.

Evidence

No matter the assessment type, your client does not care about the cool exploit chains you pull off or how easily you “pwned” their network. Ultimately, they are paying for the report deliverable, which should clearly communicate the issues discovered and evidence that can be used for validation and reproduction. Without clear evidence, it can be challenging for internal security teams, sysadmins, devs, etc., to reproduce your work while working to implement a fix or even to understand the nature of the issue.

What to Capture

As you know, each finding will need to have evidence. It may also be prudent to collect evidence of tests that were performed but were unsuccessful, in case the client questions your thoroughness. If you’re working on the command line, Tmux logs may be sufficient evidence to paste into the report as literal terminal output, but they can be horribly formatted. For this reason, capturing your terminal output for significant steps as you go along and tracking that separately alongside your findings is a good idea. For everything else, screenshots should be taken.
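If you want a verbatim capture of a single significant step outside of your Tmux logs, the script utility from util-linux can record one command's terminal session to a file (the flags shown are the Linux version; BSD/macOS variants differ):

```shell
# Record one command's full terminal session to an evidence file.
# -q suppresses script's start/done banners; -c runs a single command.
# step-evidence.log is a hypothetical filename.
script -q -c 'id' step-evidence.log
```

The resulting file contains the command's output exactly as rendered, which you can then trim and paste into the report.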

Storage

Much like with your notetaking, it’s a good idea to come up with a framework for how you organize the data collected during an assessment. This may seem like overkill on smaller assessments, but if you’re testing in a large environment and don’t have a structured way to keep track of things, you’re going to end up forgetting something, violating the rules of engagement, or doing things more than once, which can be a huge time waster, especially during a time-boxed assessment. Below is a suggested baseline folder structure, but you may need to adapt it depending on the type of assessment you’re performing or unique circumstances.

  • Admin
    • Scope of Work (SoW) that you’re working off of, your notes from the project kickoff meeting, status reports, vulnerability notifications, etc
  • Deliverables
    • Folder for keeping your deliverables as you work through them. This will often be your report but can include other items such as supplemental spreadsheets and slide decks, depending on the specific client requirements
  • Evidence
    • Findings
      • It’s suggested to create a folder for each finding you plan to include in the report, keeping the evidence for each finding in its own container to make piecing the walkthrough together easier when you write the report.
    • Scans
      • Vuln scans
        • Export files from your vuln scanner for archiving
      • Service enum
        • Export files from tools you use to enumerate services in the target environment like Nmap, Masscan, Rumble, etc.
      • Web
        • Export files for tools such as ZAP or Burp state files, EyeWitness, Aquatone, etc.
      • AD enum
        • JSON files from Bloodhound, CSV files generated from PowerView or ADRecon, Ping Castle data, Snaffler log files, CME logs, data from Impacket tools, etc.
    • Notes
      • A folder to keep your notes in.
    • OSINT
      • Any OSINT output from tools like Intelx and Maltego that doesn’t fit well in your notes document.
    • Wireless
      • Optional; if wireless testing is in scope, you can use this folder for output from wireless testing tools.
    • Logging output
      • Logging output from Tmux, Metasploit, and any other log output that does not fit the “Scan” subdirectories listed above.
    • Misc files
      • Web shells, payloads, custom scripts, and any other files generated during the assessment that are relevant to the project.
    • Retest
      • This is an optional folder if you need to return after the original assessment and retest the previously discovered findings. You may want to replicate the folder structure you used during the initial assessment in this directory to keep your retest evidence separate from your original evidence.

It’s a good idea to have scripts and tricks ready for setting up your folder structure at the beginning of an assessment. You could take the following command to make your directories and subdirectories and adapt it further.

d41y@htb[/htb]$ mkdir -p ACME-IPT/{Admin,Deliverables,Evidence/{Findings,Scans/{Vuln,Service,Web,'AD Enumeration'},Notes,OSINT,Wireless,'Logging output','Misc Files'},Retest}

d41y@htb[/htb]$ tree ACME-IPT/

ACME-IPT/
├── Admin
├── Deliverables
├── Evidence
│   ├── Findings
│   ├── Logging output
│   ├── Misc Files
│   ├── Notes
│   ├── OSINT
│   ├── Scans
│   │   ├── AD Enumeration
│   │   ├── Service
│   │   ├── Vuln
│   │   └── Web
│   └── Wireless
└── Retest

Formatting and Redaction

Credentials and Personally Identifiable Information (PII) should be redacted in screenshots, along with anything that would be morally objectionable, like graphic material or obscene comments and language. You may also consider the following:

  • Adding annotations to the image, like arrows or boxes, to draw attention to the important items in the screenshot, particularly if a lot is happening in the image.
  • Adding a minimal border around the image to make it stand out against the white background of the document.
  • Cropping the image to only display the relevant information.
  • Including the address bar in the browser or some other information indicating what URL or host you’re connected to.

Screenshots

Wherever possible, you should try to use terminal output over screenshots of the terminal. It is easier to redact, lets you highlight the important parts, typically looks neater in the document, and keeps the document from becoming a massive, unwieldy file if you have loads of findings. You should be careful not to alter terminal output, since you want to give an exact representation of the command you ran and its result. It is OK to shorten/cut out unnecessary output and mark the removed portion with <SNIP>, but never alter output or add things that were not in the original command or output. Using text-based figures also makes it easier for the client to copy/paste to reproduce your results. It’s also important that the source material you’re pasting from has all formatting stripped before going into your Word document. If you’re pasting text that has embedded formatting, you may end up pasting non-UTF-8 encoded characters into your commands, which may cause the command to not work correctly when the client tries to reproduce it.
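A quick sanity check before a command goes into the report is to scan the text for bytes outside printable ASCII, which catches smart quotes and dashes silently introduced by word processors. A minimal sketch, using an en-dash (octal \342\200\223) disguised as a hyphen:

```shell
# Write a command containing a sneaky en-dash instead of a real hyphen,
# then flag any line containing bytes outside printable ASCII.
printf 'curl \342\200\223s http://example.com\n' > cmd.txt
LC_ALL=C grep -n '[^ -~]' cmd.txt    # prints the offending line(s)
```

If grep prints nothing (and exits non-zero), the text is clean ASCII and safe to paste.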

One common way of redacting screenshots is through pixelation or blurring using a tool such as Greenshot. Research has shown that this method is not foolproof, and there’s a high likelihood that the original data could be recovered by reversing the pixelation/blurring technique. This can be done with a tool such as Unredacter. Instead, you should avoid this technique and use black bars over the text you would like to redact. You should edit the image directly and not just apply a shape in MS Word, as someone with access to the document could easily delete it. As an aside, if you are writing a blog post or something else on the web with redacted sensitive data, do not rely on HTML/CSS styling to obscure the text, as this can easily be viewed by highlighting the text or temporarily editing the page source. When in doubt, use console output, but if you must use a terminal screenshot, make sure you are appropriately redacting information.

Terminal

Typically, the only thing that needs to be redacted from terminal output is credentials, including password hashes. For password hashes, you can usually strip out the middle and leave the first and last 3 or 4 characters to show there was actually a hash there. For cleartext credentials or any other human-readable content that needs to be obfuscated, you can replace it with a <REDACTED> or <PASSWORD REDACTED> placeholder, or similar.
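For example, a hash can be partially redacted with a one-line sed substitution that keeps just the first and last four characters (the <SNIP> placeholder is one convention; match whatever your report template uses):

```shell
# Keep the first and last 4 characters of a hash and redact the middle.
# The value below is the well-known empty LM hash, used purely as an example.
hash='aad3b435b51404eeaad3b435b51404ee'
printf '%s\n' "$hash" | sed -E 's/^(.{4}).*(.{4})$/\1<SNIP>\2/'
# → aad3<SNIP>04ee
```

The same substitution works for NTLM, SHA, or any other fixed-format hash string.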

You should also consider color-coded highlighting in your terminal output to call out the command that was run and the interesting output from that command. This enhances the reader’s ability to identify essential parts of the evidence and what to look for if they try to reproduce it on their own. If you’re working on a complex web payload, it can be difficult to pick out the payload in a gigantic URL-encoded wall of text if you don’t do this for a living. You should take all opportunities to make the report clearer to your readers, who will often not have as deep an understanding of the environment as you do by the end of the assessment.

What Not to Archive

When starting a pentest, you are being trusted by your customers to enter their network and “do no harm” wherever possible. This means not bringing down any hosts or affecting the availability of applications, not changing passwords, not making significant or difficult-to-reverse configuration changes, and not viewing or removing certain types of data from the environment. This data may include unredacted PII, potentially criminal information, anything considered legally “discoverable”, etc. For example, if you gain access to a network share with sensitive data, it’s probably best to just screenshot the directory with the files in it rather than opening individual files and screenshotting the file contents. If the files are as sensitive as you think, the client will get the message and know what’s in them based on the file names. Collecting actual PII and extracting it from the target environment may create significant compliance obligations for storing and processing that data (under regulations such as GDPR) and could open up a slew of issues for your company and you.

Types of Reports

Differences Across Assessment Types

Vulnerability Assessment

Vulnerability assessments involve running an automated scan of an environment to enumerate vulnerabilities. These can be authenticated or unauthenticated. No exploitation is attempted, but you will often look to validate scanner results so your report may show a client which scanner results are actual issues and which are false positives. Validation may consist of performing an additional check to confirm a vulnerable version is in use or a setting/misconfig is in place, but the goal is not to gain a foothold and move laterally/vertically. Some customers will even ask for scan results with no validation.

Internal vs External

An external scan is performed from the perspective of an anonymous user on the internet targeting the organization’s public systems. An internal scan is conducted from the perspective of a scanner on the internal network and investigates hosts from behind the firewall. This can be done from the perspective of an anonymous user on the corporate user network, emulating a compromised server, or any number of different scenarios. A customer may even ask for an internal scan to be conducted with credentials, which can lead to considerably more scanner findings to sift through but will also produce more accurate and less generic results.

Report Contents

These reports typically focus on themes that can be observed in the scan results and highlight the number of vulns and their severity levels. These scans can produce a LOT of data, so identifying patterns and mapping them to procedural deficiencies is important to prevent the information from becoming overwhelming.

Pentesting

Pentesting goes beyond automated scans and can leverage vulnerability scan data to help guide exploitation. Like vulnerability scans, these can be performed from an internal or external perspective. Depending on the type of pentest, you may not perform any kind of vulnerability scanning at all.

A pentest may be performed from various perspectives, such as “black box”, where you have no more information than the name of the company during an external (or a network connection for an internal), “grey box”, where you are given just in-scope IP addresses/CIDR network ranges, or “white box”, where you may be given credentials, source code, configurations, and more. Testing can be performed with zero evasion to attempt to uncover as many vulns as possible, or from a hybrid-evasive standpoint to test the customer’s defenses by starting out evasive and gradually becoming “noisier” to see at what level internal security teams/monitoring tools detect and block you. Typically, once you are detected in this type of assessment, the client will ask you to move to non-evasive testing for the remainder of the assessment. This is a great assessment type to recommend to clients with some defenses in place but not a highly mature defensive security posture. It can help to show gaps in their defenses and where they should concentrate efforts on enhancing their detection and prevention rules. For more mature clients, this type of assessment can be a great test of their defenses and internal procedures to ensure that all parties perform their roles properly in the event of an actual attack.

Finally, you may be asked to perform evasive testing throughout the assessment. In this type of assessment, you will try to remain undetected for as long as possible and see what kind of access, if any, you can obtain while working stealthily. This can help to simulate a more advanced attacker. However, this type of assessment is often limited by time constraints that are not in place for a real-world attacker. A client may also opt for a longer-term adversary simulation that may occur over multiple months, with few company staff aware of the assessment and few or no client staff knowing the exact start day/time of the assessment. This assessment type is well-suited for more security mature organizations and requires a bit of a different skill set than a traditional network/application pentester.

Internal vs External

Similar to vulnerability scanning perspectives, external pentesting will typically be conducted from the perspective of an anonymous attacker on the internet. It may leverage OSINT data/publicly available information to attempt to gain access to sensitive data via applications or the internal network by attacking internet-facing hosts. Internal pentesting may be conducted as an anonymous user on the internal network or as an authenticated user. It is typically conducted to find as many flaws as possible to obtain a foothold, perform horizontal and vertical privesc, move laterally, and compromise the internal network.

Inter-Disciplinary Assessments

Some assessments may require involvement from people with diverse skillsets that complement one another. While logistically more complex, these tend to be organically more collaborative between the consulting team and the client, which adds tremendous value to the assessment and trust to the relationship. Some examples of these types of assessments include:

  • Purple Team Style
  • Cloud Focused Pentesting
  • Comprehensive IoT

Web Application Pentesting

Depending on the scope, this type of assessment may also be considered an inter-disciplinary assessment. Some application assessments may only focus on identifying and validating the vulnerabilities in an application with role-based, authenticated testing with no interest in evaluating the underlying server. Others may want to test both the application and the infrastructure with the intent of initial compromise being through the web application itself and then attempting to move beyond the application to see what other hosts and systems behind it exist that can be compromised. The latter type of assessment would benefit from someone with a development and application testing background for initial compromise and then perhaps a network-focused pentester to “live off the land” and move around or escalate privileges through AD or some other means beyond the application itself.

Hardware Pentesting

This type of testing is often done on IoT-type devices but can be extended to testing the physical security of a laptop shipped by the client, an onsite kiosk, or an ATM. Each client will have a different comfort level with the depth of testing here, so it’s vital to establish the rules of engagement before the assessment begins, particularly when it comes to destructive testing. If the client expects their device back in one piece and functioning, it is likely inadvisable to try desoldering chips from the motherboard or similar attacks.

Draft Report

It is becoming more commonplace for clients to expect a dialogue and to have their feedback incorporated into a report. This may come in many forms, whether they want to add comments on how they plan to address each finding, tweak potentially inflammatory language, or move things around to where it suits their needs better. For these reasons, it’s best to plan on submitting a draft report first, giving the client time to review it on their own, and then offering a time slot where they can review it with you to ask questions, get clarification, or explain what they would like to see. The client is paying for the report deliverable in the end, and you must ensure it is as thorough and valuable to them as possible. Some will not comment on the report at all, while others will ask for significant changes/additions to help it suit their needs, whether to make it presentable to their board of directors for additional funding or to use the report as an input to their security roadmap for performing remediation and hardening their security posture.

Final Report

Typically, after reviewing the report with the client and confirming that they are satisfied with it, you can issue the final report with any necessary modifications. This may seem like a frivolous process, but several auditing firms will not accept a draft report to fulfill their compliance obligations, so it’s important from the client’s perspective.

Post-Remediation Report

It is also common for a client to request that the findings you discovered during the original assessment be tested again after they’ve had an opportunity to correct them. This is all but required for organizations beholden to a compliance standard such as PCI. You should not redo the entire assessment for this phase; instead, you should focus on retesting only the findings, and only the hosts affected by those findings, from the original assessment. You also want to ensure that there is a time limit on how long after the initial assessment you perform remediation testing. Here are some of the things that might happen if you don’t:

  • The client asks you to test their remediation several months or even a year or more later, and the environment has changed so much that it’s impossible to get an “apples to apples” comparison.
  • If you check the entire environment for new hosts affected by a given finding, you may discover new hosts that are affected and fall into an endless loop of remediation testing the new hosts you discovered last time.
  • If you run new large-scale scans like vulnerability scans, you will likely find stuff that wasn’t there before, and your scope will quickly get out of control.
  • If a client has a problem with the “snapshot” nature of this type of testing, you could recommend a Breach and Attack Simulation (BAS) type tool to periodically run those scenarios to ensure they do not continue popping up.

If any of these situations occur, you should expect more scrutiny around severity levels and perhaps pressure to modify things that should not be modified to help them out. In these situations, your response should be carefully crafted to be both clear that you’re not going to cross ethical boundaries, but also commiserate with their situation and offer some ways out of it for them. This allows you to keep your integrity intact, fosters the feeling with the client that you sincerely care about their plight, and gives them a path forward without having to turn themselves inside out to make it happen.

One approach could be to treat this as a new assessment. If the client is unwilling, then you would likely want to retest just the findings from the original report and carefully note in the report the length of time that has passed since the original assessment, that this is a point-in-time check to assess whether ONLY the previously reported vulns still affect the originally reported host or hosts, and that the client’s environment has likely changed significantly and a new assessment was not performed.

In terms of report layout, some folks may prefer to update the original assessment by tagging affected hosts in each finding with a status, while others may prefer to issue a new report entirely that has some additional comparison content and an updated executive summary.

Attestation Report

Some clients will request an Attestation Letter or Attestation Report that is suitable for their vendors or customers who require evidence that they’ve had a pentest done. The most significant difference is that your client will not want to hand over the specific technical details of all of the findings or credentials or other secret information that may be included to a third party. This document can be derived from the report. It should focus only on the number of findings discovered, the approach taken, and general comments about the environment itself. This document should likely only be a page or two long.

Other Deliverables

Slide Deck

You may also be requested to prepare a presentation that can be given at several different levels. Your audience may be technical, or they may be more executive. The language and focus should be as different in your executive presentation as the executive summary is from the technical finding details in your report. Only including graphs and numbers will put your audience to sleep, so it’s best to be prepared with some anecdotes from your own experience or perhaps some recent current events that correlate to a specific attack vector or compromise. Bonus points if said story is in the same industry as your client. The purpose of this is not fear-mongering, and you should be careful not to present it that way, but it will help hold your audience’s attention. It will make the risk relatable enough to maximize their chances of doing something about it.

Spreadsheet of Findings

The spreadsheet of findings should be pretty self-explanatory. This is all of the fields in the findings of your report, just in a tabular layout that the client can use for easier sorting and other data manipulation. This may also assist them with importing those findings into a ticketing system for internal tracking purposes. This document should not include your executive summary or narratives. Ideally, learn how to use pivot tables and use them to create some analytics that the client might find interesting. The most helpful objective in doing this is sorting findings by severity or category to help prioritize remediation.

Vulnerability Notifications

Sometimes during an assessment, you will uncover a critical flaw that requires you to stop work and inform your clients of an issue so they can decide if they would like to issue an emergency fix or wait until after the assessment is over.

When to draft one

At a minimum, this should be done for any finding that is directly exploitable, exposed to the internet, and results in unauthenticated remote code execution or sensitive data exposure, or that leverages weak/default credentials for the same. Beyond that, expectations should be set during the project kickoff process. Some clients may want all high and critical findings reported out-of-band regardless of whether they’re internal or external. Some folks may need mediums as well. It’s usually best to set a baseline for yourself, tell the client what to expect, and let them ask for modifications to the process if they need them.

Contents

Due to the nature of these notifications, it’s important to limit the amount of fluff in these documents so the technical folks can get right to the details and begin fixing the issue. For this reason, it’s probably best to limit this to the typical content you have in the technical details of your findings and provide tools-based evidence for the finding that the client can quickly reproduce if needed.

Components of a Report

Prioritizing Your Efforts

During an assessment, especially large ones, you’ll be faced with a lot of “noise” that you need to filter out to best focus your efforts and prioritize findings. As testers, you are required to disclose everything you find, but when there is a ton of information coming at you through scans and enumeration, it is easy to get lost, focus on the wrong things, waste time, and potentially miss high-impact issues. This is why it is essential that you understand the output your tools produce and have repeatable steps to sift through all of this data, process it, and remove false positives or informational issues that could distract you from the goal of the assessment. Experience and a repeatable process are key to focusing your efforts on high-impact findings such as RCE flaws or others that may lead to sensitive data disclosure. It is worth reporting informational findings, but instead of spending the majority of your time validating these minor, non-exploitable issues, consider consolidating some of them into categories that show the client you were aware the issues existed but were unable to exploit them in any meaningful way.

When starting in pentesting, it can be difficult to know what to prioritize, and you may fall down rabbit holes trying to exploit a flaw that doesn’t exist or getting a broken PoC exploit to work. Time and experience help here, but you should also lean on senior team members and mentors. Something you may waste half a day on could be something they have seen many times and could quickly tell you is a false positive or worth running down. Even if they can’t give you a quick black-and-white answer, they can at least point you in a direction that saves you several hours. Surround yourself with people you’re comfortable asking for help and who won’t make you feel like an idiot if you don’t know all the answers.

Writing an Attack Chain

The attack chain is your chance to show off the cool exploitation chain you took to gain a foothold, move laterally, and compromise the domain. It can be a helpful mechanism to help the reader connect the dots when multiple findings are used in conjunction with each other and to give a better understanding of why certain findings are assigned the severity ratings they are. For example, a particular finding on its own may be medium-risk but, combined with one or two other issues, could elevate to high-risk, and this section is your chance to demonstrate that. A common example is using Responder to intercept NBT-NS/LLMNR traffic and relaying it to hosts where SMB signing is not enabled. It can get really interesting if some findings can be incorporated that might otherwise seem inconsequential, like using an information disclosure of some sort to help guide you through an LFI to read an interesting configuration file, log in to an external-facing application, and leverage functionality to gain remote code execution and a foothold inside the internal network.

There are multiple ways to present this, and your style may differ. For example, you might start with a summary of the attack chain and then walk through each step with supporting command output and screenshots to show the chain as clearly as possible. A bonus here is that you can re-use this as evidence for your individual findings, so you don’t have to format things twice and can copy/paste them into the relevant finding.

Writing a Strong Executive Summary

The Executive Summary is one of the most important parts of the report. Your clients are ultimately paying for the report deliverable which has several purposes aside from showing weaknesses and reproduction steps that can be used by technical teams working on remediation. The report will likely be viewed in some part by other internal stakeholders such as Internal Audit, IT and IT Security management, C-level management, and even the Board of Directors. The report may be used to either validate funding from the prior year for infosec or to request additional funding for the following year. For this reason, you need to ensure that there is content in the report that can be easily understood by people without technical knowledge.

Key Concepts

The intended audience for the Executive Summary is typically the person that is going to be responsible for allocating the budget to fixing the issues you discovered. For better or worse, some of your clients have likely been trying to get funding to fix the issues presented in the report for years and fully intend to use the report as ammunition to finally get some stuff done. This is your best chance to help them out. If you lose your audience here and there are budgetary limitations, the rest of the report can quickly become worthless. Some key things to assume to maximize the effectiveness of the Executive Summary are:

  • It should be obvious, but this should be written for someone who isn’t technical at all. The typical barometer for this is “if your parents can’t understand what the point is, then you need to try again”.
  • The reader doesn’t do this every day. They don’t know what Rubeus does, what password spraying means, or how it’s possible that tickets can grant different tickets.
  • This may be the first time they’ve ever been through a pentest.
  • Much like the rest of the world in the instant gratification age, their attention span is small. When you lose it, you are extraordinarily unlikely to get it back.
  • Along the same lines, no one likes to read something where they have to Google what things mean. Those are called distractions.
Do
  • When talking about metrics, be as specific as possible.
  • It’s a summary. Keep it that way.
  • Describe the types of things you managed to access.
  • Describe the general things that need to improve to mitigate the risks you discovered.
  • If you’re feeling brave and have a decent amount of experience on both sides, provide a general expectation for how much effort will be necessary to fix some of this.
Do Not
  • Name or recommend specific vendors.
  • Use acronyms.
  • Spend more time talking about stuff that doesn’t matter than you do about the significant findings in the report.
  • Use words that no one has ever heard of before.
  • Reference a more technical section of the report.
Anatomy of the Executive Summary

The first thing you’ll likely want to do is get a list of your findings together and try categorizing the nature of the risk of each one. These categories will be the foundation for what you’re going to discuss in the executive summary.

Summary of Recommendations

Before you get into the technical findings, it’s a good idea to provide a Summary of Recommendations or Remediation Summary. Here you can list your short, medium, and long-term recommendations based on your findings and the current state of the client’s environment. You’ll need to use your experience and knowledge of the client’s business, security budget, staffing considerations, etc., to make accurate recommendations. Your clients will often have input on this section, so you want to get it right, or the recommendations are useless. If you structure this properly, your clients can use it as the basis for a remediation roadmap. If you opt not to do this, be prepared for clients to ask you to prioritize remediation for them. It may not happen all the time, but if you have a report with 15 high-risk findings and nothing else, they’re likely going to want to know which of them is “the most high”.

You should tie each recommendation back to a specific finding and not include any short or medium-term recommendations that are not actionable by remediating findings reported later in the report. Long-term recommendations may map back to informational/best practice recommendations such as “Create baseline security templates for Windows Server and Workstation hosts” but may also be catch-all recommendations such as “Perform periodic Social Engineering engagements with follow-on debriefings and security awareness training to build a security-focused culture within the organization from the top down.”

Some findings could have an associated short and long-term recommendation. For example, if a particular patch is missing in some places, that is a sign that the organization struggles with patch management and perhaps does not have a strong patch management program, along with associated policies and procedures. The short-term solution would be to push out the relevant patches, while the long-term objective would be to review patch and vulnerability management processes to address any gaps that would prevent the same issue from cropping up again. In the application security world, it might instead be fixing the code in the short term and in the long term, reviewing the SDLC to ensure security is considered early enough in the development process to prevent issues from making it into production.

Findings

After the Executive Summary, the Findings section is one of the most important. This section gives you a chance to show off your work, paint the client a picture of the risk to their environment, give technical teams the evidence to validate and reproduce issues, and provide remediation advice.

Appendices

There are appendices that should appear in every report, but others will be dynamic and may not be necessary for all reports. If any of these appendices bloat the size of the report unnecessarily, you may want to consider whether a supplemental spreadsheet would be a better way to present the data.

Static Appendices

Scope

Shows the scope of the assessment. Most auditors that the client has to hand your report to will need to see this.

Methodology

Explain the repeatable process you follow to ensure that your assessments are thorough and consistent.

Severity Ratings

If your severity ratings don’t directly map to a CVSS score or something similar, you will need to articulate the criteria necessary to meet your severity definitions. You will have to defend this occasionally, so make sure it is sound, can be backed up with logic, and that the findings you include in your report are rated accordingly.
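If your ratings do map to CVSS, the v3.x qualitative scale is simple to encode. A minimal sketch of that mapping follows; the “Informational” label for a zero score is our substitution for the scale’s official “None” rating.

```python
def severity_from_cvss(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if score == 0.0:
        return "Informational"  # CVSS itself labels a 0.0 score "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"  # 9.0 - 10.0

print(severity_from_cvss(9.8))  # Critical
```

If you deviate from a standard scale like this, that is exactly the deviation your severity appendix needs to justify.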

Biographies

If you perform assessments with the intent of fulfilling PCI compliance specifically, the report should include a bio for the personnel performing the assessment, with the specific goal of articulating that the consultant is adequately qualified to perform it. Even without compliance obligations, it can give the client peace of mind that the person doing their assessment knows what they are doing.

Dynamic Appendices

Exploitation Attempts and Payloads

If you’ve ever done anything in incident response, you should know how many artifacts are left behind after a pentest for the forensics folks to sift through. Be respectful and keep track of what you did so that if the client experiences an incident, they can differentiate your activity from an actual attacker’s. If you generate custom payloads, particularly if you drop them on disk, you should also include the details of those payloads here so the client knows exactly where to go and what to look for to get rid of them. This is especially important for payloads that you cannot clean up yourself.

Compromised Credentials

If a large number of accounts were compromised, it is helpful to list them here so that the client can take action against them if necessary.

Configuration Changes

If you made any configuration changes in the client environment, you should itemize all of them so that the client can revert them and eliminate any risks you introduced into the environment. Obviously, it’s ideal to put things back the way you found them yourself, and to get approval in writing from the client before changing things, to prevent getting yelled at later on if your change has unintended consequences for a revenue-generating process.

Additional Affected Scope

If you have a finding with a list of affected hosts that would be too much to include with the finding itself, you can usually reference an appendix in the finding to see a complete list of the affected hosts where you can create a table to display them in multiple columns. This helps keep the report clean instead of having a bulleted list several pages long.

Information Gathering

If the assessment is an External Pentest, you may include additional data to help the client understand their external footprint. This could include whois data, domain ownership information, subdomains, discovered emails, accounts found in public breach data, an analysis of the client’s SSL/TLS configurations, and even a listing of externally accessible ports/services. This data can be beneficial in a low-to-no-finding report but should convey some sort of value to the client and not just be “fluff”.

Domain Password Analysis

If you’re able to gain Domain Admin access and dump the NTDS database, it’s a good idea to run this through Hashcat with multiple wordlists and rules and even brute-force NTLM up through eight characters if your password cracking rig is powerful enough. Once you’ve exhausted your cracking attempts, a tool such as DPAT can be used to produce a nice report with various statistics. You may want to include some key stats from this report. This can help drive home themes in the Executive Summary and Findings sections regarding weak passwords. You may also wish to provide the client with the entire DPAT report as supplementary data.
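As a rough illustration of the statistics a DPAT-style report surfaces, the stdlib-only sketch below computes a few of them from a hypothetical set of cracked passwords. The sample data and the 12-character threshold are made up for the example.

```python
from collections import Counter

# Hypothetical cracked passwords, e.g. parsed from a Hashcat potfile.
cracked = ["Summer2023!", "Password1", "Password1", "Welcome1", "Summer2023!", "Summer2023!"]
total_hashes = 10  # total unique NT hashes dumped from NTDS

crack_rate = len(cracked) / total_hashes * 100      # percentage of hashes cracked
top_passwords = Counter(cracked).most_common(3)     # most reused passwords
short_count = sum(1 for p in cracked if len(p) < 12)  # weak-length passwords

print(f"Cracked {crack_rate:.0f}% of hashes")
print(f"Most reused: {top_passwords}")
print(f"Cracked passwords under 12 characters: {short_count}")
```

Numbers like these (“60% of hashes cracked”, “the most common password was reused three times”) are exactly the kind of specific metric that strengthens the Executive Summary.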

Reporting

How to Write Up a Finding

The Findings section of your report is the “meat”. This is where you get to show off the issues you found, explain how you exploited them, and give the client guidance on how to remediate them. The more detail you can put into each finding, the better. This will help technical teams reproduce the finding on their own and then be able to test that their fix worked. Being detailed in this section will also help whoever is tasked with the post-remediation assessment if the client contracts your firm to perform it. While you’ll often have “stock” findings in some sort of database, it’s essential to tweak them to fit your client’s environment to ensure you aren’t misrepresenting anything.

Breakdown of a Finding

Each finding should have the same general type of information that should be customized to your client’s specific circumstances. If a finding is written to suit several different scenarios or protocols, the final version should be adjusted to only reference the particular circumstances you identified. “Default Credentials” could have different meanings for risk if it affects a DeskJet printer versus the building’s HVAC control or another high-impact web application. At a minimum, the following information should be included for each finding:

  • Description of the finding and what platform(s) the vuln affects
  • Impact if the finding is left unresolved
  • Affected systems, networks, environments, or applications
  • Recommendation for how to address the problem
  • Reference links with additional information about the finding and resolving it
  • Steps to reproduce the issue and the evidence that you collected

Some additional, optional fields include:

  • CVE
  • OWASP, MITRE IDs
  • CVSS or similar score
  • Ease of exploitation and probability of attack
  • Any other information that might help learn about and mitigate the attack

Showing Finding Reproduction Steps Adequately

As mentioned in the previous section regarding the Executive Summary, it’s important to remember that even though your point-of-contact might be reasonably technical, if they don’t have a background specifically in pentesting, there is a pretty decent chance they won’t have any idea what they’re looking at. They may have never even heard of the tool you used to exploit this vuln, much less understand what’s important in the wall of text it spits out when the command runs. For this reason, it’s crucial to guard against taking things for granted and assuming people know how to fill in the blanks themselves. If you don’t do this correctly, it will erode the effectiveness of your deliverable, this time in the eyes of your technical audience. Some concepts to consider:

  • Break each step into its own figure. If you perform multiple steps in the same figure, a reader unfamiliar with the tools being used may not understand what is taking place, much less have an idea of how to reproduce it themselves.
  • If setup is required, capture the full configuration so the reader can see what the exploit config should look like before running the exploit. Create a second figure that shows what happens when you run the exploit.
  • Write a narrative between figures describing what is happening and what is going through your head at this point in the assessment. Do not try to explain what is happening in a figure with its caption alone and string together a bunch of consecutive figures.
  • After walking through your demonstration using your preferred toolkit, offer alternative tools that can be used to validate the finding if they exist.

Your primary objective should be to present evidence in a way that is understandable and actionable to the client. Think about how the client will use the information you’re presenting. If you’re showing a vuln in a web application, a screenshot of Burp isn’t the best way to present this information if you’re crafting your own web requests. The client will probably want to copy/paste the payload from your testing to recreate it, and they can’t do that if it’s just a screenshot.

Another critical thing to consider is whether your evidence is completely and utterly defensible. For example, if you’re trying to demonstrate that information is being transmitted in clear text because of the use of basic authentication in a web application, it’s insufficient just to screenshot the login prompt popup. That shows that basic auth is in place but offers no proof that information is being transmitted in the clear. In this instance, showing the login prompt with some fake credentials entered into it, and the clear text credentials in a Wireshark packet capture of the human-readable authentication request leaves no room for debate. Similarly, if you’re trying to demonstrate the presence of a vuln in a particular web application or something else with a GUI, it’s important to capture either the URL in the address bar or output from an ifconfig or ipconfig command to prove that it’s on the client’s host and not some random image you downloaded from Google. Also, if you’re screenshotting your browser, turn your bookmarks bar off and disable any unprofessional extensions or dedicate a specific web browser to your testing.

Effective Remediation Recommendations

Example
  • Bad: Reconfigure your registry settings to harden against X.
  • Good: To fully remediate this finding, the following registry hives should be updated with the specified values. Note that changes to critical components like the registry should be approached with caution and tested on a small group of hosts prior to making large-scale changes.
    • [list the full path to the actual registry hive]
      • Change value X to value Y
Rationale

While the “bad” example is at least somewhat helpful, it’s fairly lazy, and you’re squandering a learning opportunity. Once again, the reader of this report may not have the depth of experience in Windows that you do, and giving them a recommendation that requires hours’ worth of work just to figure out how to carry it out is only going to frustrate them. Do your homework and be as specific as reasonably possible. Doing so has the following benefits:

  • You learn more this way and will be much more comfortable answering questions during the report review. This will reinforce the client’s confidence in you and will be knowledge that you can leverage on future assessments and to help level up your team.
  • The client will appreciate you doing the research for them and outlining specifically what needs to be done so they can be as efficient as possible. This will increase the likelihood that they will ask you to do future assessments and recommend you and your team to their friends.

It’s also worth drawing attention to the fact that the “good” example includes a warning that changing something as important as the registry carries its own set of risks and should be performed with caution. Again, this indicates to the client that you have their best interests in mind and genuinely want them to succeed. For better or worse, there will be clients that will blindly do whatever you tell them to and will not hesitate to try and hold you accountable if doing so ends up breaking something.

Selecting Quality References

Each finding should include one or more external references for further reading on a particular vuln or misconfig. Some criteria that enhance the usefulness of a reference:

  • A vendor-agnostic source is helpful. Obviously, if you find a Cisco ASA vuln, a Cisco reference link makes sense, but you shouldn’t lean on them for a writeup on anything outside of networking. If you reference an article written by a product vendor, chances are the article’s focus will be telling the reader how their product can help, when all the reader wants is to know how to fix it themselves.

  • A thorough walkthrough or explanation of the finding and any recommended workarounds or mitigations is preferable. Don’t choose articles behind a paywall or ones where you only get part of what you need without paying.

  • Use articles that get to the point quickly. This isn’t a recipe website, and no one cares how often your grandmother used to make those cookies. You have problems to solve, and making someone dig through the entire NIST 800-53 document or an RFC is more annoying than helpful.
  • Choose sources with clean websites that don’t make you feel like a bunch of crypto miners are running in the background, with ads popping up everywhere.
  • If possible, write some of your own source material and blog about it. The research will aid you in explaining the impact of the finding to your clients, and while the infosec community is pretty helpful, it’d be preferable not to send your clients to a competitor’s website.

Reporting Tips and Tricks

Templates

It’s best to have a blank report template for every assessment type you perform. If you are not using a reporting tool and just working in old-fashioned MS Word, you can always build a report template with macros and placeholders to fill in some of the data points you fill out for every assessment. You should work with blank templates every time and not just modify a report from a previous client, as you could risk leaving another client’s name in the report or other data that does not match your current environment. This type of error makes you look amateur and is easily avoidable.

MS Word Tips & Tricks

Microsoft Word can be a pain to work with, but there are several ways you can make it work for you to make your life easier, and it’s easily the least of the available evils. Here are a few tips & tricks for becoming an MS Word guru.

  • Font Styles: You should get as close as you possibly can to a document without any “direct formatting” in it. Direct formatting is highlighting text and clicking the button to make it bold, italic, underlined, colored, highlighted, etc. If you use font styles and find that you’ve overlooked a setting in one of your headings that messes up its placement or appearance, updating the style itself updates all instances of that style in the entire document, instead of you having to manually update all 45 places you used your random heading.
  • Table Styles: Same concept as font styles, applied to tables. It makes global changes much easier and promotes consistency throughout the report. It also generally makes everyone using the document less miserable, both as an author and as QA.
  • Captions: Use the built-in capability if you’re putting captions on things. Using this functionality causes the captions to renumber themselves if you have to add or remove something from the report, saving you a GIGANTIC headache. It typically has a built-in font style that allows you to control how the captions look.
  • Page numbers: Page numbers make it much easier to refer to specific areas of the document when collaborating with the client to answer questions or clarify the report’s content. It’s the same for clients working internally with their teams to address the findings.
  • TOC: A Table of Contents is a standard component of a professional report. The default TOC is probably fine, but if you want something custom, like hiding page numbers or changing the tab leader, you can select a custom TOC and tinker with the settings.
  • List of Figures/Tables: It’s debatable whether a List of Figures or Tables should be put in the report. This is the same concept as a TOC, but it only lists the figures or tables in the report. These trigger off captions, so if you’re not using captions on one or the other, or both, this won’t work.
  • Bookmarks: Bookmarks are most commonly used to designate places in the document that you can create hyperlinks to. If you plan on using macros to combine templates, you can also use bookmarks to designate entire sections that can be automatically removed from the report.
  • Custom Dictionary: You can think of a custom dictionary as an extension of Word’s built-in AutoCorrect feature. If you find yourself misspelling the same words every time you write a report or want to prevent embarrassing typos, you can add these words to a custom dictionary, and Word will automatically replace them for you. Unfortunately, this feature does not follow the template around, so people will have to configure their own.
  • Language Settings: The primary thing you want to use custom language settings for is most likely to apply them to the font style you created for your code/terminal/text-based evidence. You can select the option to ignore spelling and grammar checking within the language settings for this font style. This is helpful because, after you build a report with a bunch of figures in it, when you run the spell checker you don’t have to click ignore a billion times to skip all the stuff in your figures.
  • Custom Bullet/Numbering: You can set up custom numbering to automatically number things like your findings, appendices, and anything else that might benefit from automatic numbering.
  • Quick Access Toolbar Setup: There are many options and functions you can add to your Quick Access Toolbar that you should peruse at your leisure to determine how useful they will be for your workflow.
    • Back
    • Undo/Redo
    • Save
  • Useful Hotkeys: [F4] will apply the last action you took again. For example, if you highlight some text and apply a font style to it, you can highlight something else to which you want to apply the same font style and just hit [F4], which will do the same thing. If you’re using a TOC and lists of figures and tables, you can hit [Ctrl+A] to select all and [F9] to update all of them simultaneously. This will also update any other “fields” in the document and sometimes does not work as planned, so use it at your own risk. A more commonly known one is [Ctrl+S] to save. You should be doing it often in case Word crashes, so you don’t lose data. If you need to look at two different areas of the report simultaneously and don’t want to scroll back and forth, you can use [Ctrl+Alt+S] to split the window into two panes. This may seem like a silly one, but if you accidentally hit your keyboard and have no idea where your cursor is, you can hit [Shift+F5] to move the cursor to where the last revision was made.

Automation

When developing report templates, you may get to a point where you have a reasonably mature document but not enough time or budget to acquire an automated reporting platform. A lot of automation can be gained through macros in MS Word documents. You will need to save your templates as .dotm files, and you will need to be in a Windows environment to get the most out of this. Some of the most common things you can do with macros are:

  • Create a macro that will throw a pop-up for you to enter key pieces of information that will then get automatically inserted into the report template where designated placeholder variables are:
    • Client name
    • Dates
    • Scope details
    • Type of testing
    • Environment or application names
  • You can combine different report templates into a single document and have a macro go through and remove entire sections that don’t belong in a particular assessment type.
    • This eases the task of maintaining your templates since you only have to maintain one instead of many
  • You may also be able to automate quality assurance tasks by automatically correcting common errors. Given that Word macros are written in what is essentially a programming language of its own (VBA), it’s left to you to use online resources to learn how to accomplish these tasks.
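Word macros themselves are written in VBA, but the placeholder-substitution idea behind them can be illustrated language-agnostically. The Python sketch below uses the stdlib `string.Template`; the placeholder names and values are hypothetical, not a prescribed set.

```python
from string import Template

# Hypothetical report boilerplate; the $placeholders mirror the variables a
# Word macro would substitute into a .dotm template.
boilerplate = Template(
    "$client engaged $firm to perform an $assessment_type "
    "from $start_date to $end_date."
)

# Values a pop-up form (or config file) would collect at report creation time.
details = {
    "client": "Example Corp",
    "firm": "Acme Security",
    "assessment_type": "Internal Penetration Test",
    "start_date": "2024-03-04",
    "end_date": "2024-03-15",
}

print(boilerplate.substitute(details))
```

The same pattern, applied across every occurrence of each placeholder in the template, is what eliminates leftover previous-client names.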

Reporting Tools/Findings Database

Once you do several assessments, you’ll start to notice that many of the environments you target are afflicted by the same problems. If you do not have a database of findings, you’ll waste a tremendous amount of time rewriting the same content repeatedly, and you risk introducing inconsistencies in your recommendations and in how thoroughly or clearly you describe the finding itself. If you multiply these issues by an entire team, the quality of your reports will vary wildly from one consultant to the next. At a minimum, you should maintain a dedicated document with sanitized versions of your findings that you can copy/paste into your reports. You should constantly strive to customize findings to a client environment whenever it makes sense, but having templated findings saves a ton of time.

However, it is time well spent to investigate and configure one of the available platforms designed for this purpose. Some are free, and some must be paid for, but they will most likely pay for themselves quickly in the amount of time and headache you save if you can afford the initial investment.

Misc/Tricks

  • Aim to tell a story with your report. Why does it matter that you could perform Kerberoasting and crack a hash?
  • Write as you go. Don’t leave reporting until the end. Your report does not need to be perfect as you test, but documenting as much as you can, as clearly as you can, during testing will help you be as comprehensive as possible and not miss things or cut corners while rushing on the last day of the testing window.
  • Stay organized. Keep things in chronological order, so working with your notes is easier. Make your notes clear and easy to navigate, so they provide value and don’t cause you extra work.
  • Show as much evidence as possible while not being overly verbose. Show enough screenshots/command output to clearly demonstrate and reproduce issues, but do not add loads of extra screenshots or unnecessary command output that will clutter up the report.
  • Clearly show what is being presented in screenshots. Use a tool such as Greenshot to add arrows/colored boxes to screenshots and add explanations under the screenshot if needed. A screenshot is useless if your audience has to guess what you’re trying to show with it.
  • Redact sensitive data wherever possible. This includes cleartext passwords, password hashes, other secrets, and any data that could be deemed sensitive to your clients. Reports may be sent around a company and even to third parties, so you want to ensure you’ve done your due diligence not to include any data in the report that could be misused. A tool such as Greenshot can be used to obfuscate parts of a screenshot (use solid boxes, NO BLURRING, as blurred text can sometimes be recovered!).
  • Redact tool output wherever possible to remove elements that non-hackers may construe as unprofessional. In CrackMapExec’s (CME) case, you can change the value printed for compromised hosts in your config file so you don’t have to change it in your report every time. Other tools may have similar customization.
  • Check your Hashcat output to ensure that none of the candidate passwords is anything crude. Many wordlists will have words that can be considered crude/offensive, and if any of these are present in the Hashcat output, change them to something innocuous.
  • Check grammar, spelling, and formatting; ensure fonts and font sizes are consistent; and spell out acronyms the first time you use them in a report.
  • Make sure screenshots are clear and do not capture extra parts of the screen that bloat their size. If your report is difficult to interpret due to poor formatting, or the grammar and spelling are a mess, it will detract from the technical results of the assessment. Consider a tool such as Grammarly or LanguageTool, both of which are much more powerful than Microsoft Word’s built-in spelling and grammar check.
  • Use raw command output where possible, but when you need to screenshot a console, make sure it’s not transparent and showing your background/other tools. The console should have a solid background with a readable theme. Your client may print the report, so you may want to consider a light background with dark text so you don’t demolish their printer cartridge.
  • Keep your hostname and username professional. Don’t show screenshots with a prompt like azzkicker@clientsmasher.
  • Establish a QA process. Your report should go through at least one, but preferably two, rounds of QA. You should never be the sole reviewer of your own work, and you want to put together the best possible deliverable, so pay attention to the QA process. At a minimum, if you’re independent, you should sleep on it for a night and review it again. Stepping away from the report for a while can sometimes help you see things you overlook after staring at it for a long time.
  • Establish a style guide and stick to it, so everyone on your team follows a similar format and reports look consistent across all assessments.
  • Use autosave with your notetaking tool and MS Word. You don’t want to lose hours of work because a program crashes. Also, backup your notes and other data as you go, and don’t store everything on a single VM. VMs can fail, so you should move evidence to a secondary location as you go. This is a task that can and should be automated.
  • Script and automate wherever possible. This will ensure your work is consistent across all assessments you perform, and you don’t waste time on tasks repeated on every assessment.
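
The Hashcat hygiene check mentioned above is one of those tasks worth scripting. A minimal sketch that scans potfile-style `hash:password` lines for flagged terms (the `FLAGGED` set is a placeholder you would maintain yourself, and the sample hashes are illustrative):

```python
# Flag cracked passwords containing terms you don't want appearing in a report.
# FLAGGED is a placeholder wordlist; maintain your own.
FLAGGED = {"badword", "slur"}

def flag_crude(potfile_lines):
    """Yield (hash, password) pairs whose password contains a flagged term."""
    for line in potfile_lines:
        hash_part, _, password = line.rstrip("\n").partition(":")
        if any(term in password.lower() for term in FLAGGED):
            yield hash_part, password

# Illustrative potfile entries (hash values are just examples).
sample = [
    "8846f7eaee8fb117ad06bdd830b7586c:Password123",
    "e19ccf75ee54e06b06a5907af13cef42:badword1!",
]
hits = list(flag_crude(sample))
print(hits)
```

Running a check like this over your cracked-password output before pasting it into a report is exactly the kind of repeated task that, once automated, stays consistent across every assessment.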

Client Communication

Strong written and verbal communication skills are paramount for anyone in a pentesting role. During your engagements, you must remain in constant contact with your clients and serve appropriately in your role as a trusted advisor. They are hiring your company and paying a lot of money for you to identify issues in their networks, give remediation advice, and educate their staff on the issues you find through your report deliverable. At the start of every engagement, you should send a start notification email including information such as:

  • Tester name
  • Description of the type/scope of the engagement
  • Source IP address for testing
  • Anticipated testing dates
  • Primary and secondary contact information (email and phone)
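
A start notification covering these points can be templated with Python’s standard `email` library. All names, addresses, IPs, and dates below are illustrative placeholders (the IP is from the documentation range), not real engagement data:

```python
from email.message import EmailMessage

# Build a start notification email; every value here is a placeholder.
msg = EmailMessage()
msg["Subject"] = "Pentest Start Notification - Internal Assessment"
msg["From"] = "tester@consultancy.example"
msg["To"] = "poc@client.example"
msg.set_content(
    "Tester name: J. Tester\n"
    "Engagement: Internal network penetration test (scope: 10.0.0.0/24)\n"
    "Source IP address for testing: 203.0.113.50\n"
    "Anticipated testing dates: 2024-01-08 to 2024-01-12\n"
    "Primary contact: J. Tester, +1-555-0100, tester@consultancy.example\n"
    "Secondary contact: A. Lead, +1-555-0101, lead@consultancy.example\n"
)
print(msg.as_string())
```

Templating this (and the matching stop notification) means no engagement starts without the client having every detail they need to distinguish your traffic from a real attack.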

At the end of each day, you should send a stop notification to signal the end of testing. This can be a good time to give a high-level summary of findings so the report does not entirely blindside the client. You can also reiterate expectations for report delivery at this time. You should, of course, be working on the report as you go and not leave it 100% to the last minute, but it can take a few days to write up the entire attack chain, executive summary, findings, recommendations, and perform self-QA checks. After this, the report should go through at least one round of internal QA, which can take some time.

The start and stop notifications also give the client a window for when your scans and testing activities were taking place in case they need to run down any alerts.

Aside from formal communications, it is good to keep an open dialogue with your clients and build and strengthen the trusted advisor relationship. Did you discover an additional external subnet or subdomain? Check with the client to see if they’d like to add it to the scope. Did you discover a high-risk SQLi or RCE flaw on an external website? Stop testing and formally notify the client to see how they would like to proceed. A host seems down from scanning? It happens, and it’s better to be upfront about it than to try to hide it. Got Domain Admin/Enterprise Admin? Give the client a heads-up in case they see alerts and get nervous, or so they can prepare their management for the pending report. Also, at this point, let them know that you will keep testing and looking for other paths, but ask them if there is anything else they’d like you to focus on, or servers/databases that should still be off limits even with DA privileges that you can target.

You should discuss the importance of detailed notes and scanner logging/tool output. If your client asks you if you hit a specific host on X day, you should be able to, without a doubt, provide documented evidence of your exact activities. It stinks to get blamed for an outage, but it’s even worse if you get blamed for one and have zero concrete evidence to prove that it was not a result of your testing.

Keeping these communication tips in mind will go a long way towards building goodwill with your client and winning repeat business and even referrals. People will want to work with others who treat them well and work diligently and professionally, so this is your time to shine. With excellent technical skills and communication skills, you will be unstoppable.

Presenting Your Report - The Final Product

Once the report is ready, it needs to go through review before delivery. Once delivered, it is customary to provide the client with a report review meeting to either go over the entire report, go over the findings alone, or answer any questions they may have.

QA Process

A sloppy report will call into question everything about your assessment. If your report is a disorganized mess, is it even plausible that you performed a thorough assessment? Ensure your report deliverable is a testament to your hard-earned knowledge and hard work on the assessment and adequately reflects both. The client isn’t going to see most of what you did during the assessment.

The report is your highlight reel and is honestly what the client is paying for.

You could have executed the most complex attack chain in the history of attack chains, but if you can’t get it on paper in a way that someone else can understand, it may as well have never happened at all.

If possible, every report should undergo at least one round of QA by someone who isn’t the author. Some teams may also opt to break up the QA process into multiple steps. It will be up to you, your team, or your organization to choose the right approach for the size of your team. If you are just starting on your own and don’t have the luxury of having someone else review your report, it is strongly recommended that you walk away from it for a while, or at a minimum sleep on it and review it again. Once you have read through a document 45 times, you start overlooking things. This mini-reset can help you catch things you didn’t see after you had been staring at it for days.

It is good practice to include a QA checklist as part of your report template. This should consist of all the checks the author should make regarding content and formatting and anything else that you may have in your style guide. This list will likely grow over time as you and your team’s processes are refined, and you learn which mistakes people are most prone to making. Make sure that you check grammar, spelling, and formatting! A tool such as Grammarly or LanguageTool is excellent for this. Don’t send a sloppy report to QA because it may get kicked back to you to fix before the reviewer even looks at it, and it can be a costly waste of time for you and others.

If you have access to someone who can perform QA and you begin trying to implement a process, you may soon find that as the team grows and the number of reports being produced increases, things can get difficult to track. At a basic level, a Google Sheet or some equivalent could be used to help make sure things don’t get lost, but if you have many more people and you have access to a tool like Jira, that could be a much more scalable solution. You’ll likely also need a central place to store your reports so that other people can access them to perform the QA process; there are many storage options out there that should work.

Ideally, the person performing QA should not be responsible for making significant modifications to the report. If there are minor typos, phrasing, or formatting issues to address that can be done more quickly than sending the report back to the author to change, that’s likely fine. For missing or poorly illustrated evidence, missing findings, unusable executive summary content, etc., the author should bear the responsibility for getting that document into presentable condition.

You obviously want to be diligent about reviewing the changes made to your report so that you can stop making the same mistakes in subsequent reports. It’s absolutely a learning opportunity, so don’t squander it. If the same mistake happens across multiple people, you may want to consider adding that item to your QA checklist to remind people to address it before sending reports to QA. There aren’t many better feelings in this career than the day a report you wrote gets through QA without any changes.

It may be considered strictly a formality, but it’s reasonably common to initially issue a “Draft” copy of the report to the client once QA has been completed. Once the client has the draft report, they should be expected to review it and let you know whether they would like an opportunity to walk through the report with you to discuss modifications and ask questions. If any changes or updates need to be made after this conversation, they can be made and a “Final” version issued. The final report is often identical to the draft report, except that it says “Final” instead of “Draft”. It may seem frivolous, but some auditors will only accept a final report as an artifact, so it could be quite important to some clients.

Report Review Meeting

Once the report has been delivered, it’s fairly customary to give the client a week or so to review the report, gather their thoughts, and offer to have a call to review it with them to collect any feedback they have on your work. Usually, this call covers the technical finding details one by one and allows the client to ask questions about what you found and how you found it. These calls can be immensely helpful in improving your ability to present this type of data, so pay careful attention to the conversation. If you find yourself answering the same questions every time, that could indicate that you need to tweak your workflow or the information you provide to help answer those questions before the client asks them.

Once the report has been reviewed and accepted by both sides, it is customary to change the DRAFT designation to FINAL and deliver the final copy to the client. From here, you should archive all of your testing data per your company’s retention policies until a retest of remediation findings is performed at the very least.