[WEB SECURITY] FW: What's the Difference; PEN Testing and Black Box Testing?

Arian J. Evans arian.evans at anachronic.com
Sat May 10 15:17:03 EDT 2008

Susan -- The terms "Black Box," "White Box," and "Glass Box"
testing are formally defined software-testing notions that have
been around for some 30 or 40 years, despite the ambiguity with
which the information security community uses them.

When this discussion crops up, you usually read answers ranging
from "source code vs. interface analysis" to "zero knowledge vs.
full knowledge" testing, and none of them is especially correct. I am
as guilty as anyone of bandying the terms about loosely.

I've defined the terms for you below, provided anecdotal
examples, and then listed references for further reading. Google
has ample references to the vast Software Testing/Assurance
literature from NIST, NASA, and many others:

1. Vuln Test
2. Pen Test
3. Black Box Test
4. White Box Test


1. "Vuln test," aka vulnerability testing == a subset of Black Box
testing. It most commonly means locating the artifacts of
exploitable conditions & behaviors in software. It is prone to
false positives without human analysis, because the artifacts
are often behavioral inferences and may not in fact indicate
a security defect at all.

2. "Pen test," aka penetration testing == another small subset of
Black Box testing. This testing is intended to locate and validate
exploitable conditions & behaviors by "penetrating" them upon
location. This improves the signal-to-noise ratio over "vuln testing"
by verifying the state of exploitability.

"Vuln testing" has largely replaced "pen testing" in the network
world and is starting to in the software security world. The two
are used interchangeably by many today. (e.g. many consultancies
and VARs sell a Qualys network-security scan as a "pen test")
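The signal-to-noise distinction above can be sketched in code. This is a hypothetical illustration: the endpoint, its error message, and both "tests" are invented, and the app deliberately emits a scary-looking error while not actually being injectable.

```python
# Hypothetical sketch of vuln testing (inference) vs. pen testing
# (validation). All names and behaviors here are invented.

def fake_app(query):
    """Stand-in web endpoint. It echoes a DB-style error string when it
    sees a quote, but it is NOT actually injectable (the error is cosmetic)."""
    if "'" in query:
        return "Warning: SQL syntax error near '''"
    return "results for %s" % query

def vuln_test(endpoint):
    """Inference only: flag anything that *looks* like an exploitable
    condition. Prone to false positives."""
    response = endpoint("x'")
    return "SQL syntax error" in response       # an artifact, not proof

def pen_test(endpoint):
    """Validate by actually penetrating: try to make the backend evaluate
    attacker-controlled logic and confirm the effect on the result."""
    baseline = endpoint("x")
    injected = endpoint("x' OR '1'='1")
    # Exploitable only if the injected predicate changed the result set.
    return injected != baseline and "results" in injected

print("vuln test flags it: ", vuln_test(fake_app))   # True  (false positive)
print("pen test confirms it:", pen_test(fake_app))   # False (not exploitable)
```

The vuln test fires on the mere artifact (the error string), while the pen test, by attempting exploitation, correctly reports nothing exploitable.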

3. Black Box testing is the act of verifying that a software
specification, or a specification's intended behaviors, manifests
itself in the software, without reference to the internal
workings (implementation) of the code.

Note that by definition this is going to be inference-based
testing, primarily focused on validating the outcomes of
actions & their artifacts (e.g. - Rob's Report ran and you
have the correct report in hand).

Black Box testing is also very strong at identifying errors
of omission and unintended/unforeseen emergent behaviors.

* Errors of omission = the application failed to strongly
authenticate Rob's Report. While this could be due to a design
or implementation mistake, Black Box testing merely tells
you that there is an omission of authentication.

* Emergent behaviors = two pieces of code put together (one
with a limited spec for strong data typing, and the other
with weak handling of output) result in a new set of behaviors
that fail to meet specification, though each unit of code
individually meets its own specification.

Emergent behaviors often wind up exploitable (e.g. XSS,
Content Spoofing, and HTTP control-character injection).
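Both failure classes above can be shown with a toy endpoint. This is a hypothetical sketch: the function, its parameters, and both checks are invented, and the point is only that a tester exercising the interface alone can surface the omission and the emergent behavior.

```python
# Hypothetical Black Box sketch. We exercise only the interface of a toy
# "report" endpoint; its internals are treated as opaque to the tester.

def report_endpoint(session_token, title):
    """Stand-in application code (normally invisible to a Black Box tester)."""
    # Omission: no check that session_token is present or valid.
    return "<h1>Report: %s</h1>" % title

def test_omission_of_authentication():
    """Black Box finds the omission: an anonymous caller gets the report."""
    page = report_endpoint(None, "Rob's Report")
    return "Report:" in page            # should have been an auth failure

def test_emergent_xss():
    """Black Box finds the emergent behavior: input typed fine by one unit,
    output handled weakly by another, yielding reflected markup."""
    page = report_endpoint("token", "<script>alert(1)</script>")
    return "<script>" in page           # unescaped output -> XSS-class bug

print(test_omission_of_authentication())    # True: authentication omitted
print(test_emergent_xss())                  # True: output encoding omitted
```

Neither check needed the source; both inferred the defect purely from observed outcomes.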

4. White Box testing is the act of verifying the internal workings
of a piece of software with reference to structure and/or down
to the algorithmic level. There are many different types of testing
that fall under "White Box" testing. (e.g. "structural verification")

White Box testing is primarily focused on verifying that the
design structure follows an expected or ideal structure, and
that the implemented algorithms perform the units of work
that they are intended to. White Box testing is very strong
at identifying errors of commission.

* Errors of commission = the application failed to strongly
authenticate Rob's Report due to an explicit lack of design
specification to authenticate it properly. Or perhaps the
design spec is there, but there was an implementation
failure to call the central auth component structure properly
before accepting run-time parameters for Rob's Report.

Either way, someone committed an error: they failed to
capture requirements and specify properly, or they failed to
implement the specification properly.

Note that an additional problem in White Box testing is that,
traditionally, few security qualities are explicitly defined
in an application's specification. There is no easy metadata
from which to extract them; they are often implied.
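A White Box check of the "Rob's Report" scenario might look like the following. This is a hypothetical sketch: the `Auth` class, `run_report`, and the use of a mock spy are all invented for illustration, and the point is that with the implementation in hand we can verify structure and call order against the design, not just the outcome.

```python
# Hypothetical White Box sketch: verify that run_report invokes the central
# auth component, in the specified order, before doing any work.
from unittest import mock

class Auth:
    """Invented stand-in for a central authentication/authorization module."""
    def authenticate(self, token):
        return token == "valid"
    def authorize(self, token, report_id):
        return True

def run_report(auth, token, report_id, params):
    """Invented implementation under review."""
    if not auth.authenticate(token):            # design spec: authn first
        raise PermissionError("not authenticated")
    if not auth.authorize(token, report_id):    # then authz of the caller
        raise PermissionError("not authorized")
    return "report %s with %s" % (report_id, params)

def structural_check():
    """White Box: assert the *call structure* matches the design spec."""
    spy = mock.Mock(wraps=Auth())               # records every call it proxies
    run_report(spy, "valid", 748, {"q": "sales"})
    call_names = [c[0] for c in spy.method_calls]
    return call_names == ["authenticate", "authorize"]

print(structural_check())   # True: authn then authz, as the design requires
```

Had the developer skipped or reordered the auth calls, this check would fail even though the report output itself might look correct, which is exactly the error-of-commission territory where White Box testing is strong.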


Here's an informal way to think about the definitions that
may help out if the definitions seem abstract:

Black Box: Does my application require users to log on
and allow them to run their reports, and *only* their reports?

White Box: Does the procedure for Report748 properly
call the authentication module, verify authentication,
and then verify authorization of the caller to utilize the
submitted parameters for this query, and also verify
authorization to access the specific tables for the
data requested, and then does it feed this back out
to a view in proper structure?

(My White Box example is a poor one. In a modern
web application, all those questions are unlikely to
be answered in one DB procedure)

In testing terms -- to answer those questions:

In Black Box, I am going to log onto the application
and see if I can do what the specification says I can
and should; then, if we are measuring the quality
of security, I will try to subvert things syntactically
and semantically.

In White Box I am going to look at the design first,
to see how authentication and authorization works,
then go look at the procedure to see if it invokes
the auth module correctly, then verify the other little
bits of the procedure algorithmically.

Each one tells a part of the story.

In Black Box, I may find this whole process
vulnerable to SQL injection but not know why.
(It could be that the above procedure is entirely
safe, but it feeds data to a secondary procedure
that is unsafe and results in SQL injection)

Long and short -- I identify that type safety
and the enforcement of a data/function boundary
(parameterization in SQL) have been *omitted*.
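The omitted data/function boundary is easy to demonstrate concretely. This is a sketch using `sqlite3` purely so it is self-contained; the table, rows, and payload are invented.

```python
# Sketch of the omitted control: enforcing the data/function boundary
# with parameterized SQL. sqlite3 is used only for a self-contained demo.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE reports (owner TEXT, title TEXT)")
db.execute("INSERT INTO reports VALUES ('rob', 'Rob''s Report')")
db.execute("INSERT INTO reports VALUES ('eve', 'Secret Report')")

malicious = "rob' OR '1'='1"

# Omission: string concatenation lets data cross into the function plane,
# so the attacker's predicate becomes part of the query logic.
unsafe = db.execute(
    "SELECT title FROM reports WHERE owner = '" + malicious + "'"
).fetchall()

# Enforcement: the driver binds the value as pure data via a placeholder.
safe = db.execute(
    "SELECT title FROM reports WHERE owner = ?", (malicious,)
).fetchall()

print(unsafe)   # both rows: the injected predicate subverted the query
print(safe)     # no rows: no owner literally named "rob' OR '1'='1"
```

The Black Box tester sees only the symptom (extra rows come back); the parameterized form is the control whose omission that symptom reveals.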

In White Box, I may miss the above SQL
injection if my scope is limited to just the above
explicit procedure that is safe.

Or, alternately, I might find a very obscure SQL
injection by noticing an explicit mistake that
would be very hard to find Black Box: in this case,
a piece of user-supplied data is used dynamically
in a trigger wired up to a table called by the query
in question above.

This trigger is vulnerable to Blind SQL Injection, but
exploiting it yields no behavioral inference to observe,
and hence it is very hard to find from a Black Box perspective.

In this case I would verify that someone had
*committed* an error, perhaps several. Perhaps
the design specifies always using parameterized
SQL, or never using triggers, or never using
user-supplied data in triggers.

Or perhaps this was a syntactic error in the
implementation of the trigger (the developer
mis-implemented the parameters), etc.

Hope that helps.

Here are some good references for you:





On Fri, May 9, 2008 at 5:13 PM, Susan Smoter <spire20707 at verizon.net> wrote:
> I've been on this list for some time and I find it very helpful.  Now I'd
> like some help.  I have seen the terms PEN Testing and Black Box Testing
> used interchangeably, but I think they are or can be different types of
> tests.  Seems that black box tools would be used by developers to eliminate
> coding issues and to validate false positives from white box/static testing,
> while PEN testing would only attempt to "break and enter" without necessarily
> providing coders with info about fixing the identified vulnerabilities.  If
> I've got this correct, then I'd like to find a better set of terminologies
> to use to differentiate between security testing while in the SDLC phases
> and those done in preparation for application deployment.
> Thanks for some clarification – I'm working on establishing Application
> Vulnerability Management and am having difficulty getting everyone on the
> same page due to overlapping semantics.
> Susan

Arian J. Evans.

I spend most of my money on motorcycles, mistresses, and martinis. The
rest of it I squander.
