Why Tool-Restricted Exams (Sometimes) Matter
As the author of the Red Team Ops course and certification, I often get asked why the tools available in the exam are restricted to those provided in the pre-configured environment. The answer is a little involved to explain fully on Twitter or Discord, so this post is my attempt at providing a more complete answer.
I first want to touch on the OSCP and PNPT examinations, because they’re good points of reference. They’re both popular penetration testing certifications but have completely opposite philosophies regarding tooling in their exams. I’ll preface everything I say from here on with: I don’t hold the PNPT and my OSCP is ~10 years old. These are my views from an outside perspective.
Offensive Security are well known for imposing various restrictions in their exam (in fact, they publish the OSCP exam guide). In a nutshell, they ban all commercial, autopwn, and mass-scanning tools, and limit the extent to which one can leverage the Metasploit Framework. Unfortunately for them, this can land them in some rather hot PR water.
In contrast, TCM actively use the absence of tool restrictions as a selling point for their PNPT (reading between the lines, one could reasonably conclude that half of their advertising materials are direct jabs at the OSCP).
The primary angle that TCM appear to take is that a tool free-for-all provides greater realism. Presumably because if you’re a penetration tester, you’re not typically told which tools you can or can’t use on an engagement.
What am I Being Tested on?
When looking at an exam, one must ask: am I being tested on my ability to get from A to B, or am I being asked to demonstrate a particular set of skills or knowledge along the way?
Looking at the PEN-200 course syllabus and competencies, you’ll see there’s a large emphasis on manual enumeration and exploitation. Naturally, the OSCP exam is designed around the student having to demonstrate that they can apply those skills. Logically speaking, allowing the use of automated tools when the point is to perform the steps manually is pure folly. The tool restriction is not intended to add artificial difficulty – it’s there to keep you on track with what you’re meant to be proving.
OffSec’s reasoning behind this is a) you should learn how to do something manually before you automate it; and b) in the event an automated tool fails, you’ll need to know how to do it manually anyway. I’m guessing that the reason the PNPT exam does not have the same restrictions is simply that it doesn’t have the same focus on doing things manually. And that’s fine. The two certifications just have a different set of values.
The Red Team Ops course is all about adversary simulation and performing abuses in a more stealth-oriented way compared to what you would do in a typical penetration test. So when it comes to the RTO exam, the manner in which you get from A to B actually matters a lot, and for two main reasons.
First – stealth is obviously an important factor, which may limit your ability to use particular tools that are known to be loud (BloodHound being a notable example). Second – simulating a known adversary funnels you into using pre-defined tools and techniques based on a threat profile. Therefore, working within an arbitrary set of parameters/constraints/limitations (or whatever you want to call it) is part and parcel of this type of engagement.
And just like the OSCP, it’s not logical to allow the use of some toolsets when they behave in a way that contradicts the values being instilled in the course, and when you’re assessing a student’s ability to operate in a specific way.
The idea that I’ve tried to get across in this post is that one “style” of exam is not better than the other, and a vendor’s choice to place restrictions on their exam depends on multiple factors. I certainly don’t feel (although I’m obviously biased) that tool-restricted exams should automatically be seen as an evil if there’s a decent reason for them.