Possible improvement for SUnit: yellow by default

This is just a small idea, but I’d like to share and discuss it: A test method that contains no assertions should turn yellow by default.

Why on earth?

Well, you know, in theory, everybody starts writing tests with assertions. And as we all know, we all write tests first and all that stuff.

In practice, the grass is a little less green, and I freely admit that if I do test-first (or at least test-together-with-evolving-code), I often start by writing a test method that simply contains a message send to the real method, just to see if it runs at all, or if I have some very stupid logic error in there. Sometimes I don’t even really know what I’d expect as a result, I just work myself towards the logic of looping over a stream or whatever. Using a TestCase for the first experimental runs of an evolving method is a good thing for me, because I can run my code and collect everything that’s necessary to feed into it as setup code in the TestCase from the very start. I’ll have to implement that setup later anyway, so this saves me time and makes writing tests a very natural experience (just swap out the workspace for a test case).

Okay, so what?

So I start writing the method (most of the time, I must admit, I start by writing a draft method before I write the test method, and then start with a test driver for it to see if my idea for the method’s logic is any good) and then write a test method that simply calls my method. That’s all good and well, and I like the way this helps me write code.

BUT far too often, I jump away from this test method and work on something else, because I like the results of my method and since I have a test that’s green, all is perfect.

But wait: it’s green, but this doesn’t mean it meets any requirements, because I’ve not formulated any. Remember, my test method has no assert:, deny:, or anything like that; it simply runs my method. So it doesn’t really test anything other than whether the method crashes, because that would turn the test result red.

It has happened to me more than once now that I was proud to have green tests and to have done everything so much better than most developers ever do, only to find out that even though I had a bunch of green tests, the end result was simply wrong, because the tests were green only because they didn’t test anything.

But hey, if I did it right, this would not be a problem!

That’s true. If I always started by formulating a (probably useless or sure-to-fail) assertion around my call to the method, the test would fail from the very start and I couldn’t forget to turn the test green before shipping the code.

But there are several major problems with this approach:

  1. I am bad at breaking habits – good or bad
  2. Having to think about the result of a method before I even know what the logic of the method will be, and even before I know whether it’s going to be broken up into several smaller methods, interrupts my train of thought. But if this assertion should always fail anyway, why can’t SUnit fail for me (see also the next argument)? 😉
  3. Isn’t a tool made for making life easier? So shouldn’t it check whether it has checked anything? Because if it didn’t, it simply lies to me when it turns a test green!
  4. I guess I would add a nonsensical assertion at the end of each test method, just to turn the test yellow. This would only help me develop a new bad habit instead of supporting me in my good intentions. And: SUnit can do that for me and thus enable me to concentrate on assertions that make sense while it reminds me that I still need to write an assertion.

So what am I asking for?

In fact, I am asking for the default state of a test to be yellow until a first assertion is made (or an error occurs, of course). Because green is a very dangerous color for a test method that didn’t check anything. It gives me the illusion that I am on the right track, and this is dangerous and can be expensive.
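To make the proposal concrete, here is a minimal sketch of the idea – in Python rather than Smalltalk, and with entirely hypothetical names, so it is an illustration of the rule rather than SUnit code: a toy runner counts the assertions each test method executes and keeps a passing-but-assertion-free test yellow.

```python
class YellowByDefaultCase:
    """Toy test case base class; all names here are hypothetical."""

    def __init__(self):
        self._assertion_count = 0

    def assert_(self, condition, message="assertion failed"):
        """Count every assertion before checking it."""
        self._assertion_count += 1
        if not condition:
            raise AssertionError(message)

    def run_all(self):
        """Run every test_* method and classify it green/yellow/red."""
        results = {}
        for name in dir(self):
            if not name.startswith("test_"):
                continue
            self._assertion_count = 0
            try:
                getattr(self, name)()
            except AssertionError:
                results[name] = "yellow"  # failure: an assertion was not met
            except Exception:
                results[name] = "red"     # error: the code under test crashed
            else:
                # The proposed rule: green only if at least one
                # assertion actually ran; otherwise stay yellow.
                results[name] = "green" if self._assertion_count else "yellow"
        return results


class ExampleTests(YellowByDefaultCase):
    def test_with_assertion(self):
        self.assert_(1 + 1 == 2)

    def test_without_assertion(self):
        str(1 + 1)  # runs fine, but checks nothing


print(ExampleTests().run_all())
# {'test_with_assertion': 'green', 'test_without_assertion': 'yellow'}
```

The color mapping follows SUnit’s convention (yellow = failure, red = error); the only change is the `else` branch, where a run that never asserted anything is not allowed to report green.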

So what do others think? Should we change SUnit or am I the only one with this problem?


7 thoughts on “Possible improvement for SUnit: yellow by default”

  1. I sometimes purposely write tests without any assertions. For example, I often want to write a test that serves as an example to users of how to create an instance. I want to avoid the silly “self shouldnt: [ MyObject new ] raise: Error” thing, so I just write “MyObject new”. It’s more for communication purposes than to test correctness. Although, in the case of a totally empty test, there should be some indication, but…

    What you’re really bumping up against is Smalltalk’s lack of a skipped/pending test concept, like Ruby’s rspec. I really want that for exactly the reason that you’re describing. Sometimes a test is not failing, but not passing either! We need a new domain concept to cover that 🙂
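For comparison with another ecosystem: Python’s unittest framework has exactly this third state built in. A skipped test is counted as run, but reported separately from passes and failures:

```python
import unittest

class PendingExample(unittest.TestCase):
    def test_done(self):
        self.assertEqual(2 + 2, 4)

    @unittest.skip("pending: expected result not specified yet")
    def test_pending(self):
        self.fail("not written yet")  # never executed: the test is skipped

# Run the case programmatically and inspect the three-way outcome.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PendingExample)
result = unittest.TestResult()
suite.run(result)
print(result.testsRun, len(result.skipped), len(result.failures))  # 2 1 0
```

The runner reports one skipped test alongside the pass: a third state besides green and red, much like the pending concept in RSpec.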


    1. Sean,

      first of all, thank you very much for commenting. I’m interested to hear as many opinions as possible.

      >I sometimes purposely write tests without any assertions.

      Tststs. While I understand your argument, I must of course raise my finger and say: you are misusing SUnit much more than I am 😉
      Funnily, when I teach SUnit to people, I try to make a very clear point by saying that unit tests are the best kind of documentation you can ever get, because they are alive and current (well, you know, they have the best chances of being current). But what you describe is not what I have in mind 😉 OTOH, a misused test method is better than no documentation… So who am I to point my finger at you 😉

      I tend to strongly disagree on the point of me asking for any new concept or kind of test. All that stuff makes unit testing complicated and hard to get into, and discourages its use, especially for people new to unit testing. Just like tests that are expected to fail or whatever. That stuff opens a back door for arguments like: you know, I wrote this test, knowing it’s going to fail, and I never found the time to … (you know, that kind of stuff). SUnit’s brutality is one of its (if not its only) strengths: if an assertion isn’t met, the test fails, no matter why. Basta!
      All I ask for is a little extension (in fact, I’d call it a correction) of the idea of what a passed test means: a test can only be green if it tested something and all of its assertions were met. A test method that hasn’t tested anything should not be regarded as green, whether the lack of testing is intentional or accidental…

      Such an extension would be an additional safety net: the tests check themselves to see if they are any good, so to some extent they even test themselves, at least to the extent that such a simple rule as “did I test anything at all, and is it really fair to give positive feedback?” is checkable by a framework. You can still write lots of useless tests, though, and thus trick SUnit into saying that your code is perfect, but this cannot be checked by software, at least not today.

      1. Ha ha. Our disagreement is more fundamental than I thought. I challenge the assumption that a test “has to” have an assertion to be valid. In fact, I think that’s one of the important points that BDD makes. When people are focused on testing, they do/say weird things like that 🙂 My tests exist to specify behavior, not to slavishly follow rules made up by programmers. That you can create an instance of my class by sending #fromString: is behavior. And it will fail (with an error) if it is removed from the API. More so, it’s a contract with the user that “this [and this]… is a way to create an instance”. I disagree that such a thing should be forbidden, especially in Smalltalk, where the fundamental paradigm is to trust the programmer (no private methods, unlimited reflection, etc.)

        1. I don’t really think we disagree so much. At least you say my use of SUnit isn’t so far off the limits; that makes you a nice guy 😉
          But honestly, I am not saying a test without assertions is invalid. It’s just not green 😉 It’s probably as good as if it hadn’t been run at all, because it doesn’t prove anything. Its “result” is irrelevant to the test suite, even though its content can be useful for other purposes than testing, like giving an example of the usage of an API or building up an environment for running some piece of code. So it may be far from invalid, but it’s not in a position to signal green.
          Nevertheless, it is important that it is visible to a tester that there is a method that claims to test something (just by the fact that it is named testXXX) but doesn’t.

          For documentation purposes, you could also write a test driver that is not named testXXX (in fact this is a weak argument for my case, because I could also start my exploratory development work with such a method) and is therefore not an automatic candidate for a TestSuite (you can, however, add it to a suite manually and run it as well). To destroy my argument even a little more, this is in fact nothing else than introducing another kind of test, and therefore a new concept, just not a formal one. But if my project decides to use it, this needs to be learned by new project team members, and is therefore exactly what I’d like to avoid: I’d like to keep SUnit lean and mean.

          The following in your comment is very similar to the scenario I am talking about:

          > My tests exist to specify behavior, not to slavishly follow rules made up by programmers.

          In my case they exist as a starting point to experiment with an idea for a method implementation, and it is still very open as to what the result of a method might actually be. I just wanna iterate over some list or walk a tree and see if I can use a collect: to build a certain kind of result, which may become the return value or be passed on to another method right away, I am not sure yet. I don’t want to or cannot specify an expected return value, because I don’t know yet.

          This is not a problem per se; I am convinced it is better to start such a thing off with a TestCase rather than a workspace, because once I can specify what a method returns and which exceptions it may throw, I can simply wrap my message sends in assertions.

          So I still think the behaviour I suggest is only a little additional safety net for the cases where (for whatever reason) I forget to go that last step. And I thought my idea that, by the way, a test that doesn’t actually test anything shouldn’t get a positive result was a clever one.

          Let me give you an example: If I was a project manager and asked a programmer on my team to test something, I’d like to be sure that they really go back to their desk and test something, and only give me positive feedback if tests were conducted. The way SUnit works right now, they can spend the day reading the last 456 strips on dilbert.com and, before going home, come to my cubicle and say: job done, all is good, have a nice evening!

  2. I have written a couple of thousand test methods and I don’t think there is or was one that had no real tests in it. Except for the setup stuff, of course. So I guess you have just detected your personal habit, which probably isn’t very good.

    1. Thanks Peter 😉

      In fact, even if I’m going to find out that my use of SUnit is plain wrong, I’m learning something. And that sure is a good thing 😉
      Still I think an empty test method should never be green. There’s no check, so there’s no way this should indicate anything is proven.
      But do you think that I should write a completely useless assertion just to make SUnit turn things yellow for me? This still requires me to do something to make SUnit give me the results I want/need…


    2. Just to clarify: it is never my intention to leave the test methods without assertions. It just happens, and I cannot really tell why. Maybe it’s because the phone rang and I continued somewhere else, or because I typically dive into more detailed tests once I break up the evolving methods by Refactoring and then concentrate on the more detailed tests. And, before anybody thinks I am a completely chaotic loser: it’s not that I have hundreds of green tests that are without any assertions, but it happens from time to time, and sometimes I find out months after I wrote the tests 😉
      And I also see this happening in tests of other developers.
      I attribute this to the exploratory style of writing code that I sometimes use: develop a raw idea of how things could possibly work, start coding, start writing a test case to feed data into it, and later add assertions. I usually have very good results with this style, and typically my tests get assertions once the raw concept for an algorithm proves to be feasible. But obviously, that’s not always the case.
