At the beginning of the test file, test class, or even interspersed in the test method itself, I do a start_logging STDERR when I want logging to begin. At that point any messages that are logged will be written while the test proceeds. When I'm no longer interested, I can do a finish_logging to stop the logging output.
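A minimal sketch of what such a toggle might look like. The method names start_logging and finish_logging come from the text above; this particular implementation, a mixin that swaps a Logger instance in and out, is my own assumption about how it could work.

```ruby
require 'logger'

# Hypothetical sketch of the start_logging/finish_logging toggle.
# Passing STDERR (or a file) turns output on; passing nil turns it off.
module TestLogging
  def start_logging(device)
    # Build a Logger on the given device, or drop logging entirely.
    @logger = device ? Logger.new(device) : nil
  end

  def finish_logging
    # Discard the logger so later log calls become no-ops.
    @logger = nil
  end

  def log(message)
    # Safe-navigation: silently does nothing when logging is off.
    @logger&.debug(message)
  end
end
```

With this in place, a test can flip output on for a single suspicious stretch of code and off again immediately after.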
I started to use it consistently in my test cases, putting finish_logging in the teardown method, and then at the top of the test case logging to STDERR if I wanted output or to nil if I didn't. Coupled with the logging in the classes I'm writing (which is subjected to thresholding and directed to files in normal operation), this makes quick work of determining where things have gone awry when tests fail in subtle ways.
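The teardown arrangement described above might look something like the following self-contained sketch. The test framework (Minitest here), the class name, and the helper methods are all my assumptions for illustration; only the pattern itself, finish_logging in teardown and the device chosen at the top of the test case, comes from the text.

```ruby
require 'minitest/autorun'
require 'logger'

# Hypothetical test case illustrating the pattern: the log device is
# chosen in one place (setup), and teardown always shuts logging off.
class WidgetTest < Minitest::Test
  def start_logging(device)
    @logger = device ? Logger.new(device) : nil
  end

  def finish_logging
    @logger = nil
  end

  def setup
    # Change STDERR to nil here when the output is no longer wanted.
    start_logging(STDERR)
  end

  def teardown
    # Guarantees no stray logging leaks into the next test.
    finish_logging
  end

  def test_widget_assembly
    @logger&.debug("assembling widget")  # interleaves with test progress
    assert true
  end
end
```

Because teardown always runs, logging never stays on by accident; turning output off for the whole case is a one-line change in setup.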
Fail in subtle ways? Isn't that an indicator that more tests need to be written? Yes, of course! But in the real world, not all the tests that should get written actually do get written, despite our good intentions. The logging is especially useful in these situations to help identify untested conditions. If I turn on logging output, and then look at the code and a few lines of logged values, I can figure out what's going on and whip together a test a lot more quickly than just wondering why the heck my code isn't doing what I thought it should.