Wednesday, November 17, 2010
Notes on Test Driven Development and Testing
This week I attended the Agilis 2010 conference, including a two-day course on Test Driven Development by Nat Pryce. Below are a few notes from his class: not necessarily the key teachings, but nuggets of information that I found interesting.
- One of the main arguments for using TDD is that it encourages you to improve the design of your code. Writing the test for a software component that you have designed but not yet implemented forces you to think about and question the purpose and nature of your design. For example, if while writing a unit test for a class you discover that the test code gets way too complex or convoluted, that is probably an indication that the class being tested has too broad a responsibility and should be refactored into smaller, simpler units. I had always thought of TDD as a means to build a comprehensive test suite and thereby ship fewer bugs, but I had not given much thought to the fact that it is, in essence, a way to improve your software design, making your code easier to understand, use, and maintain.
- We did some exercises using the JMock framework, which Nat Pryce co-wrote. JMock is a neat tool for mocking out the interfaces your code interacts with and validating that your code uses those interfaces as expected. JMock lets you command the mock object of an interface to behave a certain way (e.g. return specific results) and set up expectations for how you plan to call the interface (e.g. methodA will be called once and only once with arguments "ABC" and 99). These expectations are integrated with JUnit; if they are not fulfilled by the end of your test run, JUnit fails the test. (See the JMock sketch after this list.)
- The JMock Cheat Sheet page provides an overview of the JMock syntax.
- During the Q&A session with Nat, he admitted that JMock is probably best suited for green-field projects (new systems), while frameworks like Mockito are better suited for brown-field projects (preexisting systems that you are trying to create tests for).
- We spent some time discussing Monitoring Events: the concept of having your system broadcast notifications (events) about your code's execution. E.g. when an order is placed or a user logs into your system, a notification of the event is sent to a JMS topic that interested parties can subscribe to. (A publisher sketch follows this list.)
- This is great for logging. A logger component can subscribe to the topic and log all events in the system. It can then let you filter out certain events, or group events by request id to provide a holistic overview of a single user transaction (as opposed to grepping through numerous log files on multiple servers to piece together what happened when a given user transaction ran through the system, possibly spawning multiple threads in multiple JVMs).
- Monitoring Events are great for testing too. Imagine trying to assert that a call to a checkout service (to complete the purchase of a product) results in a proper inventory reduction in an asynchronous inventory system. If your test runs straight through and checks the inventory status as soon as the purchase completes, the test will likely fail, since the inventory system hasn't had time to process its update request. One might try to fix such a test by adding a sleep of, say, 10 seconds between the checkout call and the inventory check (which can still fail if the system is running slow). Slightly better, one might implement a loop that polls the inventory system every second to see if the update has been received (succeed fast). In both cases we are polluting our tests with sleep statements and lengthening the time to feedback when we run a suite of tests. A better way is to have the test subscribe to the Monitoring Event topic and complete (succeed) as soon as it receives the inventory update notification. (See the test sketch after this list.)
- You can also use Monitoring Events to build a support tool for your system. E.g. the tool could send an email or SMS text message to a support person when a certain event is received (OutOfMemoryError, external service not responding, etc.) or when a certain number of events have been received over a given time frame.
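Here is a minimal sketch of the JMock expectation style described above, using jMock 2 with JUnit 4. The Warehouse interface and OrderProcessor class are made up for illustration; only the Mockery/Expectations usage is JMock's actual API.

```java
import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

// Hypothetical collaborator and class under test, invented for this example:
interface Warehouse {
    boolean reserve(String sku, int quantity);
}

class OrderProcessor {
    private final Warehouse warehouse;
    OrderProcessor(Warehouse warehouse) { this.warehouse = warehouse; }
    void process(String sku, int quantity) {
        warehouse.reserve(sku, quantity);
    }
}

public class OrderProcessorTest {
    private final Mockery context = new Mockery();

    @Test
    public void reservesStockExactlyOnce() {
        final Warehouse warehouse = context.mock(Warehouse.class);

        // Expectation: reserve() is called once and only once with "ABC" and 99,
        // and the mock is commanded to return true when that happens.
        context.checking(new Expectations() {{
            oneOf(warehouse).reserve("ABC", 99);
            will(returnValue(true));
        }});

        new OrderProcessor(warehouse).process("ABC", 99);

        // Fails the test if the expectation above was not fulfilled.
        context.assertIsSatisfied();
    }
}
```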
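And a sketch of what broadcasting a Monitoring Event to a JMS topic might look like, using the standard JMS 1.1 API. The class name, event type strings, and property names are assumptions for illustration.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public class MonitoringEventPublisher {
    private final ConnectionFactory connectionFactory;
    private final Topic monitoringTopic; // e.g. looked up via JNDI

    public MonitoringEventPublisher(ConnectionFactory connectionFactory, Topic monitoringTopic) {
        this.connectionFactory = connectionFactory;
        this.monitoringTopic = monitoringTopic;
    }

    /** Broadcasts an event such as "ORDER_PLACED", tagged with the current request id. */
    public void publish(String eventType, String requestId) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(monitoringTopic);
            TextMessage message = session.createTextMessage(eventType);
            // Message properties let subscribers filter events with JMS selectors.
            message.setStringProperty("eventType", eventType);
            message.setStringProperty("requestId", requestId);
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```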
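Finally, a sketch of the event-subscribing test: a CountDownLatch makes the test block until the inventory update event arrives, returning immediately on success instead of sleeping. The CheckoutService, topic wiring, and "INVENTORY_UPDATED" event type are hypothetical.

```java
import static org.junit.Assert.assertTrue;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;
import javax.jms.Topic;

import org.junit.Test;

// Hypothetical service under test:
interface CheckoutService {
    void purchase(String sku, int quantity);
}

public class CheckoutSystemTest {
    // Wired up elsewhere in the real test suite; names are assumptions:
    private ConnectionFactory connectionFactory;
    private Topic monitoringTopic;
    private CheckoutService checkoutService;

    @Test
    public void checkoutTriggersAnInventoryUpdate() throws Exception {
        final CountDownLatch inventoryUpdated = new CountDownLatch(1);

        // Subscribe to the monitoring topic *before* triggering the checkout,
        // filtering on the eventType property via a JMS selector.
        Connection connection = connectionFactory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(monitoringTopic, "eventType = 'INVENTORY_UPDATED'");
        consumer.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                inventoryUpdated.countDown(); // succeed as soon as the event arrives
            }
        });
        connection.start();

        checkoutService.purchase("ABC", 1); // the asynchronous action under test

        // Waits at most 10 seconds, but returns the moment the event is received,
        // so the suite stays fast when the system is healthy.
        assertTrue("No inventory update event received",
                inventoryUpdated.await(10, TimeUnit.SECONDS));
        connection.close();
    }
}
```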
- Miscellaneous tips on testing and coding
- If you are ever testing code that depends on the system clock (e.g. at noon every day the system is supposed to execute some function), a neat trick to make that code more testable is to factor out the dependency on the system clock. Instead of having your class call System.currentTimeMillis() directly, have its constructor take a generic Clock object (or define a class variable and use dependency injection to inject a Clock implementation). Then you can have a SystemClock implementation that simply uses the current system clock, while during your test run the class under test is initialized with a FakeClock implementation that hardcodes the time to 12 PM. (See the Clock sketch after this list.)
- In your system tests, which load up and work with data in a database, have your JUnit setUp method clear out the database rather than the tearDown method. This way you have actual data to look at if a test fails (rather than a wiped-clean database). (A sketch follows this list.)
- Use Simplicators to simplify communication with 3rd party APIs: that is, a facade that maps the 3rd party interface and artifacts over to your domain model, or to something that makes sense in your system. This way you can more easily test your own code (by using mock Simplicators that have no dependency on the 3rd party system), and your code is more easily maintained (e.g. a type change or renaming of a field in some 3rd party XML response may require you to update your Simplicator implementation, but might have no impact on your business code, since the response has already been mapped over to your domain model). (See the Simplicator sketch after this list.)
- Have separate tests that verify your production setup: tests you can run in production to validate a deployment. Don't deploy your unit/system tests into production. Avoid the painful lesson GitHub recently learned, when a test run in production cleared out their entire production database!
- When programming, don't have your methods return null! Null checks pollute your code and make it harder to debug. Return empty objects instead. (See the last sketch after this list.)
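A sketch of the Clock trick. The interface and implementations follow the SystemClock/FakeClock idea from the tip above; the DailyReportJob class and its noon check are made up for illustration.

```java
import java.util.Calendar;

/** Abstraction over the system clock so tests can control time. */
interface Clock {
    long currentTimeInMillis();
}

/** Production implementation backed by the real system clock. */
class SystemClock implements Clock {
    public long currentTimeInMillis() {
        return System.currentTimeMillis();
    }
}

/** Test implementation that returns whatever time the test dictates. */
class FakeClock implements Clock {
    private final long fixedTime;
    FakeClock(long fixedTime) { this.fixedTime = fixedTime; }
    public long currentTimeInMillis() { return fixedTime; }
}

/** Hypothetical class under test: depends on Clock, not on System.currentTimeMillis(). */
class DailyReportJob {
    private final Clock clock;
    DailyReportJob(Clock clock) { this.clock = clock; }

    boolean shouldRunNow() {
        Calendar now = Calendar.getInstance();
        now.setTimeInMillis(clock.currentTimeInMillis());
        return now.get(Calendar.HOUR_OF_DAY) == 12; // run at noon
    }
}
```

In production you wire up new DailyReportJob(new SystemClock()); in a test you pass new DailyReportJob(new FakeClock(noonInMillis)) and can assert the noon behavior deterministically.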
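The setUp-not-tearDown tip in JUnit 4 form; the cleanDatabase helper is a hypothetical stand-in for whatever wipes your test schema.

```java
import org.junit.Before;
import org.junit.Test;

public class OrderRepositorySystemTest {

    @Before
    public void setUp() {
        // Clean *before* each test, not after: if a test fails, the data it
        // produced is still sitting in the database for you to inspect.
        cleanDatabase();
    }

    @Test
    public void savesAndReloadsAnOrder() {
        // ... load data, exercise the code, assert ...
    }

    // Deliberately no @After tearDown that wipes the database.

    private void cleanDatabase() {
        // hypothetical helper, e.g. truncate the test schema's tables via JDBC
    }
}
```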
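A Simplicator sketch. The vendor client and response classes (the "Acme" names) are invented; the point is that all knowledge of the 3rd party's artifacts lives in one adapter behind a domain-facing interface.

```java
/** Domain-facing interface, expressed in our terms, not the vendor's. */
interface ExchangeRates {
    double rateFor(String fromCurrency, String toCurrency);
}

// Hypothetical vendor classes, stubbed for illustration:
class AcmeRateResponse {
    private final String rateField;
    AcmeRateResponse(String rateField) { this.rateField = rateField; }
    String getRateField() { return rateField; }
}

class AcmeRatesClient {
    AcmeRateResponse fetchRate(String from, String to) {
        return new AcmeRateResponse("1.0"); // a real client would call the vendor
    }
}

/** The Simplicator: adapts the 3rd party API onto the domain interface. */
class AcmeExchangeRates implements ExchangeRates {
    private final AcmeRatesClient client;

    AcmeExchangeRates(AcmeRatesClient client) { this.client = client; }

    public double rateFor(String fromCurrency, String toCurrency) {
        // If the vendor renames or retypes a field, only this class changes;
        // business code only ever sees the ExchangeRates interface.
        AcmeRateResponse response = client.fetchRate(fromCurrency, toCurrency);
        return Double.parseDouble(response.getRateField());
    }
}
```

Your business code depends only on ExchangeRates, so tests can substitute a mock or fake ExchangeRates with no vendor dependency at all.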
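And the no-null tip in code: return an empty collection (or a Null Object) so callers never need a null check. The repository and query method are hypothetical.

```java
import java.util.Collections;
import java.util.List;

class Order { /* ... */ }

class OrderRepository {
    /** Never returns null: callers can iterate the result without null checks. */
    List<Order> findOrdersFor(String customerId) {
        List<Order> fromDb = queryDatabase(customerId); // hypothetical lookup
        return fromDb != null ? fromDb : Collections.<Order>emptyList();
    }

    private List<Order> queryDatabase(String customerId) {
        return null; // stand-in for a real query that may find nothing
    }
}
```

A caller can now write for (Order order : repository.findOrdersFor("42")) { ... } and the loop simply runs zero times when there are no orders.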