Yet another Mock Object “framework” for VA Smalltalk

I’m currently preparing a training/workshop for a new customer where we want to find ways and techniques to improve their use of unit tests in a legacy Smalltalk project. They’ve been using VA Smalltalk successfully for a couple of years and provide a very respected family of products in their field – in fact, they say they are the market leader in their business. After a few years of trying to ignore the fact that they successfully use Smalltalk in their product, they’ve come back to a point where they accept it as a useful technology that has its place and its benefits – even if it is not considered mainstream.

So they – like many other Smalltalk projects – are faced with the fact that they missed the train for unit tests and code improvements in their Smalltalk code base, while most other teams adopted these techniques years ago. So now they need to find a way to get their product under test without breaking it. Not that this can’t be done; it’s just a question of where to start and how to get all team members into the same boat, while accepting that there is no chance of reaching a significant percentage of test coverage any time soon.

But back to what I wanted to write about. I am going through notes and code snippets from projects I’ve worked on, looking for useful material we can use as a starting point or basis for discussion on their specific road to test land. I stumbled upon a little Mock Object implementation that I wrote back when I was young, so much younger than today… (wait, isn’t that a line from an old Beatles song that’s even older than that?).

It simply consists of one class that can be configured to answer messages with defined results and can then be interviewed about how it was used. The whole thing was inspired by an old blog post by Sean Malloy and is really a neat, tiny, feature-poor, simplest thing that could possibly work. Nevertheless, I couldn’t find a ready-made Mock Object implementation for VAST on VASTGoodies, so I thought it probably wouldn’t hurt if I uploaded mine.

So there it is on VASTGoodies, ready for you to explore, use, extend and post a better version back to VASTGoodies. Feel free to like it or to port a different one to VA Smalltalk – there are several better ones available for VisualWorks, Squeak and maybe more.

There really isn’t much to say about the tool other than that it’s easy to set up and use. You can always take a look at the TestCase in the MiniSMockTests Application that comes with MiniSMock.

Here’s a mini-tutorial for MiniSMock:

You set up a Mock Object like this:

mock := MockObject new.
mock answer: #test with: [Date today].
mock answer: #sayHelloTo:
  with: [:aPerson |
    Transcript show: 'Hello, ', aPerson asString; cr.
    aPerson].
mock answer: #sayHelloTo:and:
  with: [:aPerson1 :aPerson2 |
    Transcript show: 'Hello, ', aPerson1 asString, ' and ', aPerson2 asString; cr.
    aPerson2].

and you can then send messages to it. Just like this:

mock sayHelloTo: 'Joachim'.

In a TestCase you can then ask a MockObject a few questions:

mock receivedMessage: #sayHelloTo:.
mock receivedMessage: #sayHelloTo: withArguments: #('Joachim').
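Putting both halves together, a complete SUnit test using MiniSMock might look like the following sketch. The class name GreeterTest and the scenario are made up for illustration; MockObject, answer:with: and the receivedMessage: questions are the MiniSMock API shown above:

```smalltalk
"Hypothetical test method in a TestCase subclass named GreeterTest.
 The mock stands in for a collaborator of the code under test;
 here we send the message directly to keep the example short."
testGreetingIsDelegated
    | mock |
    mock := MockObject new.
    mock answer: #sayHelloTo: with: [:aPerson | aPerson].
    mock sayHelloTo: 'Joachim'.
    self assert: (mock receivedMessage: #sayHelloTo:).
    self assert: (mock receivedMessage: #sayHelloTo:
                       withArguments: #('Joachim'))
```

In a real test you would hand the mock to the object under test and let it do the talking, then interview the mock afterwards.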

And that’s all folks.
But don’t underestimate the power of Mock Objects when it comes to introducing unit tests in a legacy project!

SmalltalkInspect Episode 12 in English: Continuous Integration in VA Smalltalk

We’ve just released episode 12 of the Smalltalk Inspect podcast. This is our first attempt at doing an episode in English, and it’s an extremely interesting chat with Thomas Koschate, who shares some of his knowledge in the field of Continuous Integration in a VA Smalltalk project.

If you’ve wondered whether you can use VA Smalltalk in conjunction with CI servers like Hudson, Jenkins or CruiseControl, you should definitely listen to this episode. Once again the episode is a bit longer than we’d like, but there was nothing we could cut without stealing important and interesting information from our listeners…

SUnit extension uploaded to VASTGoodies

I’ve just uploaded my little extension to the SUnit framework to VASTGoodies. If you’d like to try it, I suggest loading the map z.ST: SUnit Testing.

What does it do?

It simply keeps the texts that were defined in test assertions like assert:description: stored in the individual TestCase when the test fails, or the name of the exception thrown when a TestCase ended in an error. This is achieved with an additional instance variable in TestCase and a tiny change to TestCase>>performTest.
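For illustration, the change could look roughly like the following sketch – this is my reconstruction of the idea, not necessarily the exact code in the extension. failureText is the new instance variable, and TestResult failure / TestResult error are the standard SUnit accessors for the exception classes that distinguish failures from errors:

```smalltalk
performTest
	"Sketch: remember why the test failed or errored before
	 letting SUnit handle the exception as it normally would."
	[self perform: testSelector asSymbol]
		on: TestResult failure, TestResult error
		do: [:ex |
			failureText := ex description.
			ex pass]
```

The important point is that the handler only records the description and then passes the exception on, so the normal SUnit bookkeeping of passes, failures and errors is untouched.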

What is that good for?

One thing I’ve been doing for quite a while now is helping Smalltalk legacy projects get back to the speed and agility that Smalltalk enables, but which these projects never reached due to their corporate state of being legacy and supposedly about to be switched off any time soon. Knowledge and motivation were lost to other projects, and therefore the code quality and project techniques are sometimes in a very sad state.
The most unwanted job in such project teams is the packager’s and code manager’s, because it takes a lot of time and is a minefield in most projects: rotten code structures, undocumented procedures for packaging and deployment preparation, and a lack of knowledge, because the person who used to do it left the team some years ago and never kept records of what to do and why.
In come continuous integration and tools like Hudson/Jenkins, which are excellent at performing automatic tasks like running tests, packaging, preparing a deployment directory and such.

A big shortcoming of SUnit in combination with Hudson is that it doesn’t keep the description texts of assertions anywhere. You can only see them if you debug a TestCase in the Test Browser or a workspace snippet, and only if you run that very test individually. Running a big TestSuite means losing these texts completely.
This is especially bad in combination with a tool that keeps a list of all failed tests, for statistical reasons and, more importantly, for all team members to check every morning. It’s really annoying if you have to rerun a single test just to find out what exactly went wrong.

So I wanted to add the descriptive text to some kind of list that I can use to provide useful feedback.

So what is different now with failureTexts?

You can now run a TestSuite and inspect the TestResult. A TestResult keeps lists of passes, failures and errors, which are the TestCases themselves (each an instance of TestCase with the name of the individual test method it ran in the variable testSelector). Now a TestCase has an additional variable named failureText which holds a descriptive text of what failed or errored (if you used assert:description:).

So if you write an assertion like

self assert: (1 + 1 = 3) description: 'Smalltalk knows that 1+1 is not 3'.

You will find the TestCase in the failures list of the TestResult, and its failureText variable will contain the text ‘Smalltalk knows that 1+1 is not 3’.
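To see this in action, you can run a suite in a workspace and walk the failures list. The following sketch uses a made-up TestCase subclass named ArithmeticTests; TestResult>>failures and the new failureText variable are as described above:

```smalltalk
"Sketch: print the failure texts after running a whole suite."
| result |
result := ArithmeticTests suite run.
result failures do: [:each |
	Transcript
		show: each printString, ' -> ', each failureText printString;
		cr]
```

The same list is what an exporter tool can walk to produce a report for the CI server.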

That’s all folks. A little change that can help a lot!

How does it help?

In our projects we have a little exporter tool that writes SUnit test results into a jUnit-compatible XML file. This can be imported by Hudson and kept as one of many statistics of a build. So now we have a list of test results that each team member can browse every morning to see if their tests failed and what exactly failed in them.
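For reference, the jUnit XML format that such an exporter targets looks roughly like this – the suite and test names are made up, and our exporter’s exact output may differ in detail, but this is the general shape Hudson expects:

```xml
<testsuite name="ArithmeticTests" tests="3" failures="1" errors="0">
  <testcase classname="ArithmeticTests" name="testAddition">
    <failure message="Smalltalk knows that 1+1 is not 3"/>
  </testcase>
  <!-- passing tests appear as empty testcase elements -->
  <testcase classname="ArithmeticTests" name="testSubtraction"/>
</testsuite>
```

The failureText stored in each failed TestCase is exactly what goes into the message attribute of the failure element.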

Can I use it in my dialect?

I haven’t tried yet, but the code change is very small and I suspect it is completely portable. One of the tests in SUnit Testing references an exception class by name (ZeroDivide), and that may have to change in other dialects, but other than that I see no reason why it shouldn’t be portable. I’m happy to provide a file-out instead of a .dat to anybody who’d like to try the change in their favorite Smalltalk. I’d also be happy to hear from you if you tried it and what you think of it.