functional. Here is a short list of things that I came up with that the Cracker needs to be able to do:
- Given a byte 0x00 - 0xff the cracker should be able to return a list of all possible parents
- Given a byte 0x00 - 0xff the cracker should be able to determine if the parents were most likely a char ^ char or a char ^ space
- Given a byte 0x00 - 0xff the cracker should be able to return a subset of parents that meet the char ^ char or char ^ space criteria
- Given two ASCII or hex strings the cracker should return the proper XOR
- Given a string representing hexadecimal the cracker should be able to return a list of composite bytes
- Given a byte 0x00 - 0xff the cracker should be able to convert it to a binary string representation
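The first requirement can be sketched quickly. Note that get_parents below is a hypothetical standalone helper invented for illustration, not the actual method from the cracker class: for a target byte it lists every unordered pair of bytes whose XOR produces that byte.

```python
def get_parents(byte, lo=0x00, hi=0xff):
    """Return all (a, b) pairs with a ^ b == byte and a <= b."""
    # For each candidate a, the partner is forced: b = a ^ byte.
    # Keeping only a <= b avoids listing each pair twice.
    return [(a, a ^ byte) for a in range(lo, hi + 1)
            if a <= (a ^ byte) <= hi]
```

Restricting lo and hi to the printable ASCII range would give the "subset of parents" behavior from the third requirement.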
I will not be writing a full Test Suite, but I highly recommend looking into the unittest module's documentation for more information on executing groups of tests in a better way: https://docs.python.org/2.7/library/unittest.html#grouping-tests
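As a minimal sketch of that grouping idea, you could load every test method from a TestCase class into a suite and run it explicitly; the placeholder test here stands in for the real cracker tests:

```python
import unittest

class TestOTPCrackerMethods(unittest.TestCase):
    # Placeholder test so the suite has something to run.
    def test_placeholder(self):
        self.assertTrue(True)

# Collect all test_* methods from the class into a suite and run them.
suite = unittest.TestLoader().loadTestsFromTestCase(TestOTPCrackerMethods)
result = unittest.TextTestRunner(verbosity=2).run(suite)
```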
Instead, what I opted to do was write a Test Case extension class, TestOTPCrackerMethods(unittest.TestCase). Like most Unit Test frameworks, unittest supports the concept of setUp() and tearDown() methods, which can be used to initialize all of the globally required variables for tests. Usually you will want to at least add a setUp() method containing something similar to the following:
def setUp(self):
    self.otpc = OTPReuseCracker()
That way you do not have to add an initializing statement to every test; before each test runs, the code in setUp() initializes a fresh copy of the Class Under Test. tearDown() is less commonly used in Unit Tests, but there are cases where you may want to explicitly destroy some objects you created during a test run, and that is what tearDown() is for. I find myself using tear downs in Integration Tests far more often.

The difference between a Unit Test and an Integration Test can be a little fuzzy sometimes, so I use the rule of thumb: "if it can fail because of a missing piece, and not only bad code, it is an Integration Test." Think of testing a function that opens a file, reads some int value from it, and adds 1. To be accurate, a test for that function would first need to ensure that the system contained the required file, and only then could it assert that the function returns 1 more than the value in the file. This dependence couples the function being tested to the system it is tested on: if creating the required file with a known value fails, the test cannot pass, but the failure is not in any way related to the actual function you are concerned with. This level of ambiguity is why I separate Unit Tests from Integration Tests. If a Unit Test fails, it always means a failure in the underlying function.
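To make that file example concrete, here is roughly what such an integration-style test looks like; read_and_increment is a function invented purely for illustration:

```python
import os
import tempfile
import unittest

def read_and_increment(path):
    """Read an int from a file and return it plus one (function under test)."""
    with open(path) as f:
        return int(f.read()) + 1

class TestReadAndIncrement(unittest.TestCase):
    def setUp(self):
        # Integration-style setup: create the file the function depends on.
        # If this fails, the test fails for reasons unrelated to the function.
        fd, self.path = tempfile.mkstemp()
        with os.fdopen(fd, 'w') as f:
            f.write('41')

    def tearDown(self):
        # Clean up the file we created; this is the kind of job tearDown is for.
        os.remove(self.path)

    def test_readAndIncrement_fileWith41_returns42(self):
        self.assertEqual(read_and_increment(self.path), 42)
```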
There are some good general rules to follow when writing Unit Tests, and indeed doing Test Driven Development in general.
Test Names should be VERY descriptive. For example, I use the format functionBeingTested_inputType_expectedOutcome
This leads to long test names, sure. But anyone who reads the test failure knows exactly what function failed, what was sent to the function, and what the expectation was. This saves a huge amount of time trying to track down potential causes.
Tests should only Test one assertion at a time. If you have a bunch of assertions in one test, it will be hard to come up with a name like the above, and the test will not tell you whether other failures would also have occurred. Sure, you could program around this, but then you have logic in your tests and you'd need tests for your tests. Not a pretty spiral, I assure you.
Tests Shouldn't use Magic Variables. Magic variables are variables whose values come from some unknown source or reasoning.
Take for example the test below:
def test_someFunction_inputString_returnsInt(self):
    input_str = 'cat'
    expected = 3  # magic var: why would input 'cat' return 3?
    self.assertEqual(obj.someFunction(input_str), expected)
This one is sort of gray in my book. The obvious example above is not often found in the wild. Good code practices say you should have descriptive function names for self-documenting code. If we change someFunction() to countChrs(), then in context the 3 is no longer REALLY a magic variable. Anyway, if you find yourself questioning, just from reading the code, why a static value was chosen, then it may be best to adjust the code or test to be more clear.
Always start by writing a Failing Test. This is the basic rule behind Test Driven Development. Every change springs from not meeting some test, so every change starts by defining how you plan to test that change. Once you have a failing test that represents the functionality you want to ensure, you can go make a change to the underlying system to try and pass the test. You continue to add small tests and make small changes incrementally until all functionality is represented in tests and all tests pass.
Make the simplest possible change that will satisfy the Failing Tests. This should seem obvious since you want to keep the complexity of a system to the minimum, but there are some interesting cases where this rule forces you to write better tests.
Let's look at a concrete example from the OTPReuseCracker:
def test_xorHexStr_goodhexStrValues_returnsXORdByte(self):
    a = '0x0f'         # binary 00001111
    b = '0x00'         # binary 00000000
    # --------------------------------
    expected = '0x0f'  # binary 00001111
    actual = self.otpc.xor_hexstr(a, b)
    self.assertEqual(actual, expected)
You should be able to tell, just from the name of the test, what is being tested, how, and what I expect the outcome to be. Furthermore, I explicitly defined WHY I expect the value I do. Finally, I assert that what the function returns indeed matches my expected output. Does this prove that the function underneath would return a proper XOR for any bytes? Well, no. Running this test the first time, I would get the failure "OTPReuseCracker has no method xor_hexstr". Okay, simplest change to fix the failure: go add a method named xor_hexstr(a,b) to the code. Now running the test will output a failure. Sweet. Progress. The failure is now "Assertion failed: None does not equal '0x0f'". Now the simplest change is to go back to the underlying code and make xor_hexstr(a,b) return '0x0f',
def xor_hexstr(self, a, b):
    return '0x0f'
but that doesn't, in fact, do what we want. The test is not general enough to catch this oversight, though. Instead you might come up with a test over a subset of bytes, like {"0xff": "0xd2", "0x08": "0x3d", "0x2a": "0x7b"}, at which point it would be reasonable to say that writing a generalized hex string XOR is now the easier change.
def test_xorHexStr_goodvalues_returnsXORdByte(self):
    byte_pairs = {'0xff': '0xd2', '0x08': '0x3d', '0x2a': '0x7b'}
    for a in byte_pairs:
        b = byte_pairs[a]
        expected = '0x{:02x}'.format(int(a, 16) ^ int(b, 16))
        actual = self.otpc.xor_hexstr(a, b)
        self.assertEqual(actual, expected)
When I went to define the expected output, I came up with the concrete implementation and expectation of the xor_hexstr() function. Now I can go back to the OTPReuseCracker class and update the code so that it returns the output as expected:
def xor_hexstr(self, a, b):
    result = int(a, 16) ^ int(b, 16)   # convert to ints and XOR them
    hex_str = '{:02x}'.format(result)  # convert back to hexadecimal
    return '0x%s' % hex_str
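Outside the class, the same logic can be exercised as a standalone function to sanity-check a few values (a sketch for illustration, not the cracker's actual method):

```python
def xor_hexstr(a, b):
    """Standalone version of the hex-string XOR, no class required."""
    result = int(a, 16) ^ int(b, 16)  # parse hex strings and XOR
    return '0x{:02x}'.format(result)  # format back as a '0x..' hex string

print(xor_hexstr('0xff', '0xd2'))  # '0x2d': 11111111 ^ 11010010 = 00101101
```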
Test For Negative Scenarios. This one I see ignored all too often. It is easy(ish) to write a bunch of tests that describe what SHOULD happen. It is considerably harder, it seems, to think of tests that assure you are properly handling what SHOULD NOT happen. For instance, in the cracker class there is a function xor_asciistr(a,b). I would like to ensure that it does not allow someone to XOR two different-length strings; if they attempt to do so, I want to raise an exception. unittest allows you to do this with a little bit of structured calling. Try/except to the rescue. Take the test below:
def test_xorASCIIStr_inputLengthMismatch_raisesValueError(self):
    a = 'dog'   # pretend m1
    b = 'cats'  # pretend m2, longer than m1
    with self.assertRaises(ValueError):
        try:
            self.otpc.xor_asciistr(a, b)
        except ValueError as e:
            # inspect e, then re-raise so assertRaises sees it
            self.assertEqual(e.args,
                             ('String length mismatch',))
            raise
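For completeness, here is a standalone sketch of what xor_asciistr might look like in order to pass that test; the real method in the cracker class may differ:

```python
def xor_asciistr(a, b):
    """XOR two equal-length ASCII strings character by character."""
    if len(a) != len(b):
        # Refuse mismatched lengths, as the negative test above demands.
        raise ValueError('String length mismatch')
    # XOR corresponding characters and rebuild a string from the results.
    return ''.join(chr(ord(x) ^ ord(y)) for x, y in zip(a, b))
```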
For more details on learning good Unit Testing practices check out Roy Osherove's videos (http://osherove.com/videos/)