
This page covers some common testing best practices and anti-patterns.

Best Practices

Use Clear Names

Tests should have names that clearly indicate the behavior and state being tested and that make it easy to determine what failed without having to look at the test's implementation, for instance catalogReturnsMetacardIdWhenIngestSucceeds or exceptionThrownWhenInvalidUserNameProvided.

JavaDoc comments should be used when capturing a test's state and behavior in the test method name would make it too long.

JUnit 5's @DisplayName annotation and Spock's descriptive method names can also help address this.
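
For example, a minimal JUnit 5 sketch (the test class name and empty test bodies are placeholders) might look like:

    import org.junit.jupiter.api.DisplayName;
    import org.junit.jupiter.api.Test;

    class CatalogIngestTest {

        @Test
        @DisplayName("Ingest returns the metacard id when the ingest succeeds")
        void catalogReturnsMetacardIdWhenIngestSucceeds() {
            // test body omitted for brevity
        }

        @Test
        @DisplayName("An exception is thrown when an invalid user name is provided")
        void exceptionThrownWhenInvalidUserNameProvided() {
            // test body omitted for brevity
        }
    }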

Test Positive and Negative Scenarios

Always remember to test all positive and negative scenarios and ensure that the proper results (error code, exception, message, etc.) are returned.

Some error scenarios are extremely difficult to test using end-to-end or component tests, which makes testing all exception scenarios at the unit test level even more critical.

Keep Related Tests Together

Use Proper Assertions and Validations

When writing a test, always:

  • Assert that the expected value is returned
  • Assert that the object under test is in the expected state
  • Verify that the mocked dependencies were called/not called as expected
  • Verify that all the mocked dependencies that were called were called with the right parameters
  • Have just enough assertions and verifications to prove the behavior being tested, but no less

Assertions and verifications should be repeated in different tests only when they are required to verify the behavior being validated. In other words, separate tests should not redundantly assert or verify the same behavior.

This best practice applies mostly to unit and component tests. Since end-to-end tests tend to be more complex and time-consuming, it sometimes makes sense to verify related behavior in a single test. This should however be avoided where possible, as doing so makes it more difficult to determine why a test failed.
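
As an illustrative sketch, assuming a hypothetical UserService that delegates lookups to a UserRepository, a JUnit/Mockito unit test combining a return-value assertion with mock verifications might look like:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.verifyNoMoreInteractions;
    import static org.mockito.Mockito.when;

    import org.junit.jupiter.api.Test;

    class UserServiceTest {

        private final UserRepository repository = mock(UserRepository.class);
        private final UserService service = new UserService(repository);

        @Test
        void returnsUserNameWhenUserExists() {
            when(repository.findName("id-1")).thenReturn("Alice");

            String name = service.lookUpName("id-1");

            // Assert that the expected value is returned
            assertEquals("Alice", name);
            // Verify the mocked dependency was called with the right parameters
            verify(repository).findName("id-1");
            // Verify no unexpected calls were made to the mock
            verifyNoMoreInteractions(repository);
        }
    }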

Mock Dependencies

When writing unit or component tests, mock all dependencies that may fail. This allows the tests to guarantee that the unit or component under test behaves as expected when one of its dependencies fails.

This is especially important in unit tests as these may be the only tests where some of those exception scenarios can be tested.

Simple dependencies (e.g., POJOs) or dependencies that are difficult to mock out (e.g., the file system, static utility classes) can be used directly without being mocked, as long as doing so doesn't go against the basic testing best practices.
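
For instance, a sketch (CatalogStore, CatalogService, and IngestException are hypothetical names) of a unit test that forces a mocked dependency to fail in order to verify the error handling:

    import static org.junit.jupiter.api.Assertions.assertThrows;
    import static org.mockito.Mockito.any;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import java.io.IOException;

    import org.junit.jupiter.api.Test;

    class CatalogServiceTest {

        private final CatalogStore store = mock(CatalogStore.class);
        private final CatalogService service = new CatalogService(store);

        @Test
        void ingestExceptionThrownWhenStoreFails() throws Exception {
            // Simulate a dependency failure that would be hard to trigger end to end
            when(store.create(any())).thenThrow(new IOException("disk full"));

            // Assert that the unit under test translates the failure as expected
            assertThrows(IngestException.class, () -> service.ingest("metacard"));
        }
    }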

Clean Up After Yourself

Always make sure that tests clean up after themselves to ensure that a test failure doesn't impact other tests.

Use the test tool's fixture capabilities (i.e., cleanup/tear-down methods) to perform any required cleanup and reset the test class to a known state instead of using finally blocks.
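
A minimal JUnit 5 sketch of this, using a temporary file purely as an illustrative example:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import java.nio.file.Files;
    import java.nio.file.Path;

    import org.junit.jupiter.api.AfterEach;
    import org.junit.jupiter.api.Test;

    class ExportFileTest {

        private Path exportFile;

        @Test
        void exportFileIsCreated() throws Exception {
            exportFile = Files.createTempFile("export", ".tmp");
            assertTrue(Files.exists(exportFile));
        }

        @AfterEach
        void cleanUp() throws Exception {
            // Runs even when the test fails, leaving a known state for other tests
            if (exportFile != null) {
                Files.deleteIfExists(exportFile);
            }
        }
    }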

Use Existing Test Tools and Frameworks

Write Re-Usable Test Code

Tests Before Changes

Before changing, fixing a bug in, or refactoring existing code, always make sure that tests exist and pass first.

If not, write unit tests for the code that is about to be changed or refactored.

When dealing with legacy code that would be difficult to unit test, consider writing component tests instead. Those should be easier to write while still making it safe to refactor or change the code as needed.

Refactorings automatically performed by an IDE are usually safe and may not require adding tests first. This is especially true for simple structural refactorings that do not affect the logic or flow of the code, such as extract class, extract method, etc. If such refactorings make writing the tests easier, then applying them without existing tests can be considered.

Be Test-Driven

Brendan Hofmann - I'm not sure I agree with this one. A lot of development is iterative in nature, and TDD can make that more expensive. We should have best-practices beyond TDD as well.

Steve Lombardi - This seems OK, but I wouldn't want developers to lose the flexibility to work how they work best.

Comment:

  • Make it clear this is just a recommendation and doesn't always apply.
  • Add "when appropriate" or "for new code" in title or description?

Follow the Test-Driven Development Test Cycle.

First, come up with an initial list of end-to-end, component, and/or unit tests that will be needed to show that the user story, requirement, or improvement works as expected or that the issue has been fixed.

Then for each one:

  1. Add a test
  2. Run all tests and see if the new test fails
  3. Write the code
  4. Run the tests until they pass
  5. Refactor the code as needed
  6. Move on to the next test
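
As a sketch of step 1, assuming a hypothetical TemperatureConverter class that does not exist yet, the cycle starts with a test that fails (or does not even compile) until just enough code is written in step 3 to make it pass:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class TemperatureConverterTest {

        @Test
        void convertsFreezingPointToFahrenheit() {
            // Red: written before TemperatureConverter exists
            assertEquals(32.0, TemperatureConverter.celsiusToFahrenheit(0.0), 0.001);
        }
    }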

Be Behavior-Driven

Test behaviors, not methods or lines of code.

Comment:

  • May need to clarify that "exceptional" behaviors can be included and how
  • Mention that test names should also be from a behavior perspective

Write each test following the Behavior-Driven Development structure, i.e., Given/When/Then. In other words, make sure each test:

  • First provides a clear context in which it will be run (given)
  • Performs the operation to be tested (when)
  • Asserts that the expected outcomes have been met (then)
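
A sketch of this structure in a plain JUnit test (ShoppingCart and Item are hypothetical), with comments marking the three sections:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class ShoppingCartTest {

        @Test
        void totalIncludesAllItemPrices() {
            // Given: a cart containing two items
            ShoppingCart cart = new ShoppingCart();
            cart.add(new Item("book", 10.00));
            cart.add(new Item("pen", 2.50));

            // When: the total is computed
            double total = cart.total();

            // Then: the expected total is returned
            assertEquals(12.50, total, 0.001);
        }
    }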

Smells

Tests Difficult to Name

A test that is difficult to name usually indicates that it is trying to verify multiple behaviors and should be broken up. It may also indicate that the class under test is doing too much, breaking the Single Responsibility Principle, and may need to be refactored.

Too Many Mocks

A test that requires many mocks usually indicates that the class under test is breaking the Single Responsibility Principle or the Law of Demeter and may need to be refactored.

Comments:

  • Consider making those component tests?
  • Not a hard and fast rule, varies

Dependencies Difficult to Mock

Classes that make it difficult to mock their dependencies usually need to be refactored to follow the Dependency Inversion Principle and use dependency injection. For cases where the class to be mocked is instantiated inside the class multiple times, consider creating and injecting a factory class or a Supplier, or use the Factory Method design pattern.
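
For example, a sketch of a hypothetical ReportPublisher that receives a Supplier instead of instantiating its connections directly, so a test can inject a Supplier that returns a mock:

    import java.util.function.Supplier;

    // Hypothetical dependency interface
    interface HttpConnection {
        void send(String payload);
    }

    // Class under test: the connection factory is injected rather than hard-coded,
    // so tests can supply mock connections without touching the network.
    class ReportPublisher {

        private final Supplier<HttpConnection> connectionFactory;

        ReportPublisher(Supplier<HttpConnection> connectionFactory) {
            this.connectionFactory = connectionFactory;
        }

        void publish(String report) {
            // A fresh connection is created per call, but tests control what is created
            HttpConnection connection = connectionFactory.get();
            connection.send(report);
        }
    }

A test can then construct the class with a lambda such as () -> mockConnection and verify the interactions on the mock.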

A Lot of Repeated Test Code

Creating Mocks Manually

Testing Multiple Behaviors in a Single Test

A test should test one and only one behavior. Having assertions and/or verifications that validate different behaviors makes it difficult to know why a test has failed and doesn't provide good insight into what is being tested.

Repeated Given, When, or Then sections in a BDD test, or tests that are difficult to name, are usually good indicators that a test is doing too much.

Comment:

  • How do you write tests that build on top of each other?
  • Brendan Hofmann - What is the right way to unit test sequential behaviors then? We should provide positive guidance to follow instead.

Anti-Patterns

Sleeps

Sleeps in tests should be avoided as they open the door to timing issues and race conditions, slow tests down, and are a major cause of test flakiness.

Some options to avoid sleeps in tests include:

  • Replace the sleep with an active wait loop (a.k.a. polling), i.e., wait until a condition has been met before moving on
  • Use external synchronization, i.e., use the class's existing notification mechanism to know it has reached a certain state before continuing
  • Refactor the code under test to eliminate concurrency
  • Use external dependency calls or side-effects as synchronization points in the tests
  • For unit or component tests, replace (using dependency injection) Executor objects with MoreExecutors.directExecutor() and ExecutorService objects with MoreExecutors.newDirectExecutorService() to make the code and test single-threaded (see the sketch after this list)
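
As a sketch of that last option, assuming a hypothetical EventProcessor that takes an Executor in its constructor, injecting Guava's MoreExecutors.directExecutor() makes the "asynchronous" work run on the test thread, so no sleep is needed:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import com.google.common.util.concurrent.MoreExecutors;

    import org.junit.jupiter.api.Test;

    class EventProcessorTest {

        @Test
        void eventIsHandledWithoutSleeping() {
            // The direct executor runs submitted tasks synchronously on the calling thread
            EventProcessor processor = new EventProcessor(MoreExecutors.directExecutor());

            processor.submit("event-1");

            // By the time submit() returns, the task has already run
            assertTrue(processor.wasHandled("event-1"));
        }
    }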

Undoing Setups in Specific Tests

Tests Need to be Run in a Specific Order

This usually indicates that some tests are not properly cleaning up after themselves or are not properly setting up their test prerequisites.

