Little Addition to SUnit: keep a failure text with errors and failures

I’ve posted about a problem I’m having with SUnit before: a TestResult does not hold the description Strings of failures and errors, so it is not easy to log unit test results with bare SUnit.

I need this for our Hudson build server integration on our project. While some developers have solved this problem by using their own test runner implementation that writes a log file in jUnit format right during the run of a TestSuite, I’d prefer to first run the tests and harvest all the results from the TestResult afterwards. It may be a mere question of taste, but I think mixing test-running code with XML generation has a certain smell.

For quite a while we simply ignored the problem and fed an XML file back to Hudson which contained the text “The test failed – please rerun it in the SUnit Browser for details”. Which is of course exactly what you have to do when you want to fix that test or the code it tests: rerun it in the Test Browser and maybe debug or step through it to see what’s wrong. But it felt strange. We had put so much effort into automated build tools and Hudson integration batch files and whatnot, just to see a bloody placeholder text in our statistics. It feels like being a whining coward, not a redneck programmer. What’s even worse is that developers tend not to add descriptions to their assertions, because they’re useless anyways. So this little glitch in SUnit may lead to tests that are cryptic. Part of my consulting job is to convince long-time Smalltalkers of the usefulness of unit testing in legacy projects. There’s not much that’s more depressing than seeing people finally use unit tests and then realize they give up on descriptive assertions for that reason.

So there’s always been the idea of “fixing” SUnit to make it keep the description strings for failed assertions. It turned out to be neither easy nor elegant, at least not with any of the ideas I tried.

The best and most natural place for an extension at first seemed to be TestResult>>#sunitAnnounce:toResult:, which was introduced in SUnit 4. Here’s what the release notes for SUnit 4 say:

TestResult now double-dispatches via the exception (see #sunitAnnounce:toResult:). This makes it easier for users to plugin specific behaviour.
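To illustrate what that double-dispatch looks like: this is only my sketch of the idea, not the exact shipped code, and selectors like #addFailure: / #addError: may differ between dialects.

```smalltalk
"Sketch: the exception class decides which bucket of the TestResult it lands in."
TestFailure >> sunitAnnounce: aTestCase toResult: aTestResult
	aTestResult addFailure: aTestCase

Error >> sunitAnnounce: aTestCase toResult: aTestResult
	aTestResult addError: aTestCase
```

So plugging in specific behaviour means hooking into the exception side of the dispatch.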

Unfortunately, there are two problems with this method:

  1. The thrown exception or failed assertion is not handed over to this method, so you can’t store it anywhere
  2. If you simply create an instance of a TestCase and send it #runCase to run it, there is no TestResult involved in running it

Another problem is that even though we mostly run our tests in the form of a TestSuite, which always produces a TestResult, the collections #failures and #errors do not store some kind of test result object, but simply the TestCase instance that failed or errored.

So the simplest thing that could possibly work is to add an instance variable to TestCase to hold a String (well, it can of course hold anything, we’re using Smalltalk after all). This variable would be set to an empty String just before a test is run and be filled with either an exception’s description if test execution results in an error, or with the description text of a failed assertion.

This gives us the ability to not change much about the way TestResult works, not break the SUnit Browser or any other SUnit-related tool, but keep the error texts.

It turns out this worked quite well for our project, and there was very little I needed to change in SUnit:

  1. Add an instance variable #failureDescription to TestAsserter (with getter/setter)
  2. Change TestCase>>#performTest to:
      "Implemented and tested on VA Smalltalk 8.5, but should be portable"
          self failureDescription: ''.
          [self perform: testSelector sunitAsSymbol]
            sunitOn: TestResult failure , TestResult error
            do: [:ex |
                self failureDescription: ex description.
                ex pass]

And that’s it. I can now iterate over the #failures or #errors of a TestResult and harvest their descriptions to do whatever I want with them. In my case, I add them as XML tags to our jUnit XML for the Hudson server. So far we’ve not had any problems with this change.
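For illustration, the harvesting loop looks roughly like this. It’s a minimal sketch: MyDomainTests is a made-up example class, XML escaping is omitted for brevity, and accessors like #selector may vary between dialects.

```smalltalk
| result xml |
result := MyDomainTests suite run.	"run the suite, collect a TestResult"
xml := WriteStream on: String new.
xml nextPutAll: '<testsuite>'.
result failures do: [:each |
	xml nextPutAll: '<testcase name="' , each selector asString , '"><failure>' ,
		each failureDescription , '</failure></testcase>'].
result errors do: [:each |
	xml nextPutAll: '<testcase name="' , each selector asString , '"><error>' ,
		each failureDescription , '</error></testcase>'].
xml nextPutAll: '</testsuite>'.
xml contents
```

The point is that all of this runs after the suite has finished, so the XML generation stays completely out of the test-running code.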

I’m well aware that this is neither an extraordinarily clever nor galactically elegant solution, especially since it involves changes to the SUnit code base. But I like the effects I see so far. The feedback from the team is: “I can now tell right from the hudson log what my bug is in many cases”.

Before putting this change up, I’d like to hear what other people think about it. Is it going in the wrong direction? Too invasive into the concepts of SUnit? Would people rather put this kind of thing into the logging portion of SUnit (be aware that the logFailure* methods are only executed for failures, not errors!), or do they like this approach? Any better ideas out there? How did you solve this problem?

Smalltalk Devroom at FOSDEM 2012

Stephan Eggermont is going to set up a Smalltalk Developer Room at the FOSDEM conference 2012, February 4-5 in Brussels, Belgium.

The Smalltalk Devroom is scheduled for Sunday, February 5th.

What exactly, you may ask, is a Developer Room? In essence, it is a full-day mini-conference that is part of the FOSDEM conference, organized by a small group of people. Stephan was kind and courageous enough to apply for such a Developer Room and got a slot. He now needs both input and help from other Smalltalk enthusiasts and open source users and committers.

FOSDEM is the biggest free and non-commercial event organized by and for the community. Its goal is to provide free and open source developers with a place to meet. And it is free to attend – you don’t even have to register, just drop in and enjoy!

So if you want to learn about Smalltalk, get in touch with Smalltalkers, or see what’s going on in the European open source community, from MySQL to JBoss, FOSDEM is the place to be in early February.

Smalltalk Inspect Episode 11: Smalltalk in den Charts

We’ve just released Episode 11 of our German-language, Smalltalk-related podcast Smalltalk Inspect. This time we interviewed Christian Haider of Smalltalked Visuals about their product smallCharts, which produces print-ready charts for newspapers right out of feeds like Bloomberg and Reuters, and about the PDF creation and analysis framework pdf4smalltalk, which is an important part of it. pdf4smalltalk is open source and available in Cincom’s public Store repository.

We cover a lot of other topics, from unit testing and the burden of working alone in software development to automatic build processes and Smalltalk as a factor in entrepreneurial success. So if you understand German and are interested in Smalltalk, feel free to download this or any of our other episodes from our podspot page or – even better, to never miss an episode – subscribe on iTunes.

Logitech K750 Mac: First Impressions

For a few days now I’ve owned a Logitech K750 solar keyboard for my Mac. And I have to say, I’m very happy with it. The typing feel is noticeably better than on the current Apple Wireless keyboard. The keys spring back a little more softly and, thanks to their concave tops, give a confident typing feel.

What the Apple Wireless completely lacks is a numeric keypad. Apple doesn’t offer a wireless keyboard with a numeric keypad so far, and accessory solutions end up costing just as much as the Logitech. Admittedly, the Logitech isn’t yet printed with the symbols for Lion’s new Launchpad and Mission Control features, but the keys work just the same. Since the symbols on the keyboard don’t mean much to me anyway, that’s no great loss. The function keys are all mapped exactly as on the original. The only thing missing from System Preferences for the Logitech keyboard is the option to use the function keys as plain function keys by default and always press Fn for the extra features. But if you work with the default setting anyway (e.g. turning the volume up with F12 instead of Fn+F12), you won’t even notice.

Design-wise, the silver keyboard goes very well with the current Macs, even if it looks a little less classy than the original. Its width matches the wired Apple keyboard almost exactly; in depth it is a bit larger, by about the height of the solar panels.

Only the mechanism for propping up the keyboard has turned out a bit too light. I apparently hit the keys rather hard, and on some keystrokes the keyboard starts to vibrate very slightly. The original Apple keyboard may be lighter and made of aluminium, but the plastic Logitech is very, very torsion-resistant and makes a sturdy impression overall. With its rubber feet it stands securely on the desk surface and doesn’t slide away. Its tilt is a bit steeper than the Wireless, but feels very good to me.

Something that annoys me about German Mac keyboards in general is the missing labels for special characters like the pipe symbol or the curly and square brackets. Unfortunately, Logitech is no exception here: if you develop on the Mac, you simply have to know by heart where these characters live. That is especially annoying since Apple’s favorite programming language Objective-C is peppered with square and curly brackets, and as a Smalltalker you also use the square brackets a few hundred times a day. Touch typists obviously have a big advantage here; everyone else just has to become one.

A word on the current Apple Wireless: I was rather disappointed by its typing feel. I previously had Apple’s wired aluminium keyboard, which was noticeably better to type on. To me, the Wireless keys feel imprecise and rattly when typing.
Now, however, the Logitech is my favorite keyboard, and by my standards (a 4.5-finger hunt-and-peck system) I’m very quick on it (although people say I type faster than I think).

That it works entirely without battery changes and charges practically all the time (in both sunlight and artificial light) is a nice side effect. I can’t say much yet about how long the battery really lasts – according to the product description it should hold out for up to ninety days – but so far the indicator next to the check button always lights up green.

At first I had doubts about whether a keyboard would suit me that isn’t recognized by the Mac as a Bluetooth device but has to be connected via a USB receiver. Especially since I wasn’t sure whether I would like the Magic Mouse (I’m actually an enthusiastic Mighty Mouse scroll-ball user), and it wasn’t clear whether I wouldn’t also need a USB port for the mouse. By now I don’t give it any thought any more: the tiny thing sits in the back of the iMac and doesn’t bother me.

Setup was painless: plug in the USB dongle and an assistant appears on screen asking for a keystroke on the keyboard to pair the Mac with the keyboard. That’s it. I didn’t install the solar app that shows whether light is currently falling on the solar cells; I saved myself the time. The keyboard has a check button for the charge level anyway. The keyboard even works in parallel with the connected Wireless…

So for me it was a very good purchase. Unless some surprise still turns up 😉

[Update from 19.01.2012: I’m still very happy with the keyboard, but I have also run into a problem – one that by no means has to be a problem for everyone: under VMware Fusion and Windows, the “^” and “<” keys are swapped in the German keyboard layout. Neither may be needed that often in everyday life, but as a programmer this is quite annoying, especially since the “^” character is used very frequently in Smalltalk. I haven’t tested this with products like VirtualBox or Parallels, and for lack of time probably won’t any time soon. Apart from that: a top keyboard, and above all the best of both worlds: wireless like the Apple Wireless and complete like the traditional Apple keyboard]

Smalltalk Inspect Episode 10: VMware GemStone Smalltalk

This is only good news for you if you understand German, because we’ve just released our 10th episode of Smalltalk Inspect, a podcast I do together with Marten Feldtmann and Sebastian Heidbrink. A few weeks ago I’d have added the word “regular” here, but it didn’t really work out this time 😉

But back to the topic: this time we interviewed Norbert Hartl, who has been using VMware GemStone/S for quite a while now in several projects. He explains some of the major differences between classical Smalltalk environments and the server-based workflow in GemStone. We also discuss some interesting issues of using object databases as compared to O/R mappers. This episode is again a bit longer than the length we shoot for, but the whole interview was way too interesting to cut an artificial break into it.

We decided to keep it up to you to hit the pause button on your MP3-Player when the thrill gets too much for you 😉

You can find the episode on our podcast homepage or on iTunes and on several other podcast feeds.

About Rewriting and our Moronic Predecessors

Over at the schauderhaft blog, people started discussing whether and why rewriting software might be a good or bad idea.

Some comments go along the lines of:

“What if the guys who wrote the code in the first place were complete idiots and had not the slightest idea what they were doing?”

You know what: chances are they weren’t, even if you think so. I’ve come to the conclusion quite a few times that the moron who wrote this method or that class should be tortured for at least a decade for what he/she did back then, hoping they’d be cooked slowly on some really painful spear for all eternity. And sometimes you find yourself throwing all that perceived crap away and rewriting the stuff “the only clever way to do it”.

And sometimes I found myself getting trouble tickets with very strange errors where somehow something wasn’t working any more, or some strange edge case made the application go wild and made its users really, deeply unhappy. Because back then there had been a long discussion with the IT guys about why something had to work exactly the way it did until recently. And trying to reimplement that missing feature, I ended up writing code surprisingly similar to the ugly code that was there before. Or I had the time to take a step back, try to get the bigger picture, and introduce a redesign. Maybe what they tried to do in a fix was some kind of specialization of what was there, but the risk of introducing a subclass was too high in the light of a near delivery date.

We should keep in mind that legacy code is most likely not the work of mutant Neanderthals who had just discovered that not all kinds of mouse are edible and keyboards aren’t weapons for killing sabertooths. More often than not they probably even knew they were writing ugly code, but for a reason. You can probably blame them for not documenting the fact, but on the other hand, would we have read and tried to understand it?

Ten or twelve years ago, many techniques or toolsets weren’t available, and the way people solved their problems back then was pragmatic, even if we see it as the wrong way to do it today. But the way they did it was probably very intelligent and clever by standards that were state of the art back then, even if we consider them irrelevant today.

One more thing to keep in mind is that chances are the same developer would probably write completely different code today, knowing what they know now and having the tools at hand today that were science fiction back then. Look at yourself and the code you wrote five years or two weeks ago. How much of it would be the same today?

One of our personal problems as developers or programmers is that we tend to consider ourselves geniuses, just because we learned something new or cleared some hurdles in our daily work that we considered hard and complicated before. And we try to make our work look much better in our own eyes than that of our colleagues. Maybe that’s because nobody ever says “Thank you for your good work” to us – so we need to do it ourselves. But sometimes we overdo it.

I will now even contradict myself to make another point: we tend to think our tools and techniques are so much more advanced than the ones people had 5 years ago. But this is often complete nonsense; we are re-inventing the very same things over and over again, pack a bit of complexity on top of the last iteration and give it a fancy new name. Just look at the programming language arena: how many languages claim to combine the best of x, y and z and eliminate the weaknesses of a and b? And how many of them introduce something really new? How many stand the test of time?

Rewriting complex software is often nothing else than that: trying to reinvent a ready-made wheel. The low-hanging fruit of a new language or platform is so attractive (remember how much endorphin your first “hello world” in JavaScript produced?) that we tend to forget that the essence of rewriting a system is to regain all the business and technological knowledge that is buried inside it. I’ve seen projects fail miserably on this. Without the domain knowledge and a complete set of requirements, your rewrite is a risky endeavour. And I must admit I’ve not yet met a project in which even one of these two was available.

All of this is not to say that rewriting is a bad idea in all cases. But it is well worth trying to understand that turning rusted code into a shining system can be a lot less risky, much cheaper, and most likely even provide a much better experience for users and stakeholders. It feels much better to pay for a visibly improving system that takes baby steps in the right direction than for a promised shining new masterpiece that you won’t be able to touch any time soon. And it’s important for us to understand that a black belt in refactoring and sensible redesign is at least as much a sign of MASTERY as being the world’s greatest greenfield programmer. Most rewrite projects aren’t really greenfield anyways, and throwing away millions of invested bucks to reinvent the wheel often doesn’t really buy much.

I find myself learning a lot from code that looks bad and ugly in the first place, and I find it rewarding to see I can simplify existing code, clean up a strange design and extend it for new requirements.

Trashing working code is often a rookie mistake and just a sign of unwillingness to understand the reasons for the state of things. And thinking a rewrite will spare you from understanding the domain while still somehow producing better code is an expensive illusion. There’s no such thing as a “purely technical” rewrite, because that would mean zero improvement for the deliverable (meaning: a project you should be fired for).

GLASS podcasts

It’s funny: just yesterday James released a recording of Dale’s talk about GLASS 2.0 from this year’s ESUG, and we’ve just recently recorded a podcast episode on GemStone and GLASS with Norbert Hartl for our Smalltalk Inspect podcast (in German).
It is currently being edited and should be released pretty soon. So if you like what you hear from Dale and would like to learn more about GemStone/S and GLASS from a user, be sure to check back here or on Smalltalk Inspect.

Sometimes, it’s the little things…

Since the release of VA Smalltalk 8.5, the environment has a nice syntax completion feature. It’s funny how much I miss this little friend every time I work with an older version…

When Dolphin Smalltalk came up with IntelliSense, it really was a surprise. Somebody did it for a dynamic language, and it was way better than you might expect. Then Pharo included OCompletion, which was way snappier than you’d expect. Now VA ST also tries to help as much as it can, and it is astonishing how well the suggestions fit. There’s much more intelligence in the tool than just “wait, here are all implemented methods starting with prin*”.

Whenever I sit at a machine with older VA ST versions, there is this little second in which something should happen but doesn’t. In a project where over time several schemes of naming methods mix together, it is very helpful to get suggestions on possible candidates.

New video on integrating Seaside and jQuery Mobile

Nick’s latest video shows how to build a jQuery Mobile application in Seaside with the jQuery Mobile add-on he announced a few days ago.

It’s well worth spending 50 minutes watching Nick code some JavaScript functions with JSON magic right within the Smalltalk Browser and seeing how the parts combine into a nicely done little Flickr photo browser. Even if jQuery Mobile is not an immediate target for you, the first half is still a good way to get a feel for how to integrate Seaside with jQuery and Ajax.