Sunday, October 30, 2011

Optional typing

Back home from Splash and recovering. I'll post some of the things I found interesting as I find time over the next few days.

Dart was definitely a big deal at the conference. One smaller thing that was interesting for me was agreeing to participate in a usability test for a Dart tool. I discovered how stupid those kinds of tests can make you feel. I did talk to someone who runs such usability tests for a different product, and they said that far more of your brainpower than you realize is used up trying to vocalize your thought processes. So that's my story, and I'm sticking to it.

But the most surprising thing I found was that I was finding the optional types useful. My normal style would be not to put them in at all, but in a couple of places where I had them (because I'd copied the code from somewhere else) they did give me very quick feedback on the errors I was making - much faster and clearer than if I'd left them out.

Partly that's because I wasn't in my normal toolset, and what I had wasn't as good at runtime error reporting. What I'm used to is a Smalltalk debugger immediately popping up, telling you that the method doesn't exist, and letting you fix it right there. It's a bit different having your code compiled into Javascript and run in a browser. When there's an error it just quietly does nothing, but if you go into the developer tools in the browser and scroll down far enough in the generated code you can see an error indicating that the method doesn't exist (by which it may just mean that you forgot one of the parameters).

I don't think this is going to turn me into a static typing advocate, but it was an interesting experience.

Gilad Bracha talked about the type system, and one of the good lines from that talk was
"Didn't you do all this 18 years ago?" "Yes, we did this in Strongtalk in 1993 and nobody paid attention. They will pay attention now."

Wednesday, October 26, 2011

Refactoring now possible for dynamic languages

There's an interesting post here from Bob Nystrom about getting used to the optional typing in Dart. But it did have one bit in particular that irked me.

"If it knows the type, then thanks to the previous point, it knows what you can do with it. Ta-da: auto-complete and refactoring are now possible for a dynamic language."

I don't want to pick on Bob; this seems to be one example of the widespread belief that you can't do refactoring in a dynamically typed language, despite the fact that much of the early work on it was done in Smalltalk. The term was actually coined with respect to Forth, as Brian Foote points out here, but it was popularized by the work of Bill Opdyke, John Brant and Don Roberts. The Brant and Roberts Refactoring Browser is currently the standard browser in VisualWorks and ObjectStudio, and is the earliest example of automated refactoring support that I know of. Thanks to Don Roberts and Brian Foote, who happen to be here at the Splash conference and so were available to provide the historical information.

It's true that in a dynamic language you have a bit less information to use during refactoring. Suppose we have a polymorphic message and we want to, say, rename the method, but only some of its implementations. In a dynamic language we don't have a reliable way to know which senders refer to the implementation we want to rename and which refer to the others. So, if we wanted to rename MyClass>>printString we don't have a way to know reliably which senders of printString mean MyClass.

The problem is that we don't have a way to know that reliably in a statically typed language either. We will have more information, which might be helpful in some circumstances. But suppose that we use a generic collection List. If we send messages to the objects in that collection, we don't know who the receiver is, so if we want to refactor, it's hard to make any assumptions about which implementation those messages refer to.

Even with inheritance this sort of situation can arise. Suppose I want to rename the method printString in a subclass B whose superclass A also defines that method. If I find that message sent to something whose static type is B, I can change the sender. But if that message is sent to something whose static type is A, what do I do? The problem is that in the presence of multiple polymorphic implementations of the same method, renaming may not be a behaviour-preserving transformation.
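To make the ambiguity concrete, here's a minimal sketch (in Python, with hypothetical class names; the same situation arises with static receiver types):

```python
# Both classes implement print_string; a tool asked to rename only
# B's version has to decide what to do with call sites like report().
class A:
    def print_string(self):
        return "an A"

class B(A):
    def print_string(self):
        return "a B"

def report(thing):
    # 'thing' may be an A or a B at runtime, so renaming B's method
    # alone would silently change what this call does for B instances.
    return thing.print_string()

print(report(A()))  # an A
print(report(B()))  # a B
```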

I suspect that people using refactoring tools in statically typed languages don't notice these issues because in practice refactoring works fine for them in most normal circumstances. But the same thing is true for people using dynamically typed languages.

And in closing I'll add one comment from Don Roberts, that when he and John Brant looked at refactoring in Java they found that although the static types did give you some more information, the difficulty of satisfying the bookkeeping of the static type system ended up making it more difficult.

Monday, October 24, 2011

Nice line from Splash/OOPSLA

I'm at the Splash conference in Portland. In this morning's Lars Bak talk about the Dart VM, there was a question/answer sequence that went roughly like

Q: What about tail call optimization?
A: Not too likely.  I don't know why you even want that. The biggest problem for me with it is that if you end up in a debugger you can't see where you came from.
Q: Well, you shouldn't be debugging in the first place

That nicely sums up some of the differences in approach of different programming communities.

Saturday, October 22, 2011

Number sequence questions on tests

IQ or aptitude tests, the sort of things you end up writing when you're in school, often feature number sequence questions where you have to fill in the next number in sequence. There's a simple trick that makes an enormous number of them much easier, that I learned when I was quite young and always make a point of teaching to children that I know. It's just applying successive differences. So, for example,
1  4  9  16 25 ...
  3  5  7  9 
16  22  34  58 106 ...
  6  12   24  48
Mathematically, taking the differences of a polynomial sequence reduces the degree of the polynomial by one, so most of the polynomials you're likely to encounter in test contexts will reduce to a constant row within two or three iterations.

So polynomials can be done mechanically, but the technique works better than that. A Fibonacci sequence reduces to itself
1  1  2  3  5  8 ...
 0  1  1  2  3 
and so do exponentials, with any constant or polynomial terms reducing as they would have on their own
1   2   4   8   16   32 ...
 1    2   4   8    16
and in general, sequences that aren't very tricky rapidly become obvious. I think I remember learning about this trick by reading a Mathematical Games column that talked about a computer program designed to take IQ tests that only knew how to do number sequences and shape similarities.
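The polynomial case is mechanical enough to write down. Here's a small Python sketch (my own illustration, not from any test source) that reduces a sequence by successive differences and predicts the next term by summing back up the last entry of each row:

```python
def differences(seq):
    """One row of successive differences."""
    return [b - a for a, b in zip(seq, seq[1:])]

def predict_next(seq):
    """Predict the next term of a polynomial sequence.

    Keep taking differences until a row is constant (each row drops
    the degree by one), then add up the last entry of every row."""
    rows = [list(seq)]
    while len(rows[-1]) > 1 and len(set(rows[-1])) > 1:
        rows.append(differences(rows[-1]))
    return sum(row[-1] for row in rows)

print(predict_next([1, 4, 9, 16, 25]))    # squares -> 36
print(predict_next([1, 8, 27, 64, 125]))  # cubes -> 216
```

For the Fibonacci and doubling sequences the rows never become constant, so there you spot the repeating pattern by eye; the function only automates the polynomial case.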

To make sure I haven't been deluding myself and needlessly bothering young relatives, I found a set of questions online and ran through them all with that method. These were the ones ranked as medium difficulty. There were 13 questions, and 3 of them that this technique didn't get.

The trickiest one that it did get was
7  21  14  42  28 ...
 14  -7  28  -14 
so two alternating sequences, each doubling. Or at least that's the interpretation I took, and it agreed with the test writers'. There are an infinite number of patterns that fit any particular sequence, so agreeing with the test writer on the simplest one is the important criterion. I'm sure there's a point to be made about such tests there.

The sequences it didn't get were 
75  15  25   5  15...
 -60  10  -20  10
which I wasn't sure of the proper answer to, but the internet suggests an alternating series of divide by 5 and add 10. 
1   2   6  24  120...
  1   4  18  96
    3   14  78
where each term is the previous term multiplied by n. And
183  305  527  749  961...
  122  222  222  212 
where the trick is not to consider the numbers as integers but rather to consider each digit separately, so the next term is 183 again as all the digits wrap back around to their original values.

Obviously there are a lot of more complex sequences that this technique won't help with, but for test questions it's extraordinarily useful. 

Sunday, October 16, 2011

Dart: Block return revisited

One of the things that was in my initial list of things I missed in Dart was the ability to return from a closure and return to the original outer scope, as Smalltalk blocks do. That's important if you're defining all your control structures in terms of closures. e.g.
   input isEmpty ifTrue: [^self]
That doesn't do you a lot of good as a guard clause if the return only returns from inside the block and just continues execution on the next line. Another place non-local return is useful is to short-circuit collection iteration. So to find the first element in a collection that satisfies some condition we can write it as
   detect: aBlock
      self do: [:each | (aBlock value: each) ifTrue: [^each]].
      ^'not found'
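For comparison, here's a hedged Python sketch of the same shape: when the only iteration primitive you have is a forEach-style method taking a callback, an exception can stand in for the non-local return (all names here are illustrative, not from any real library):

```python
class _Found(Exception):
    """Private signal used to escape the iteration early."""
    def __init__(self, value):
        self.value = value

def for_each(collection, fn):
    # Stands in for a collection's forEach (or Smalltalk do:) method.
    for each in collection:
        fn(each)

def detect(collection, predicate, if_none="not found"):
    """Return the first element satisfying predicate, like detect:."""
    try:
        def check(each):
            if predicate(each):
                raise _Found(each)  # plays the role of ^each
        for_each(collection, check)
    except _Found as found:
        return found.value
    return if_none

print(detect([1, 2, 3, 4], lambda x: x > 2))  # 3
print(detect([1, 2], lambda x: x > 5))        # not found
```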
The if statement motivation isn't so important in Dart because they have "if" as a syntactic construct and if you return from within the action clause it's part of the same method, not in a closure.
For iteration, Dart has three mechanisms, two of them syntactic, and one that's just a method. The first is the old-style "for" loop
   for (var i = 0; i <= something; i++) { print(i); }
The second is also a "for" loop, but with special syntax that iterates over each element of a collection
   for (var each in aCollection) { print(each); }
Finally, there's a forEach method listed in the Collection interface that takes a closure and applies it for each element
    aCollection.forEach((each) => print(each));
The first two forms are part of the syntax, so a return statement will break out of the loop. But if you're invoking the forEach method then there isn't a way to break out of it.

The interesting bit I discovered today is that if you write your own collection classes, you can still use any of the forms. The implementation of the second form is that it sends the iterator() message to the collection and then uses that iterator to loop. So I could write a trivial binary tree class, define an Iterator (two methods: hasNext() and next()) for it and write
   for (var x in tree) { print(x); }
and it works fine.
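The same protocol trick exists elsewhere; here's a rough Python equivalent of that trivial binary tree (my own sketch): implement the language's iteration protocol (__iter__ here, iterator() with hasNext()/next() in Dart) and the built-in for loop works over your own class.

```python
class TreeNode:
    """A trivial binary tree that plugs into the for loop."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

    def __iter__(self):
        # In-order traversal; the generator satisfies the iteration
        # protocol the way a Dart iterator() object would.
        if self.left is not None:
            yield from self.left
        yield self.value
        if self.right is not None:
            yield from self.right

tree = TreeNode(2, TreeNode(1), TreeNode(3))
for x in tree:
    print(x)  # 1, then 2, then 3
```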

I have to say both that I'm impressed that that works and that this takes a bit of the edge off my wanting non-local return. There are other uses, but I do have to admit that it's a complicated feature with some difficult edge cases, and being able to use it this way does take the air out of the most obvious motivating use case. Wanting to have an at:ifAbsent: where in the ifAbsent case I return from the original outer scope might be useful, but it's not nearly as good an example.

Dart keyword arguments

Coming from a Smalltalk background, one of the things I like in it is the message format, with keywords with colons separating the arguments rather than mathematical function syntax. So, e.g.
    aDictionary at: 12 ifAbsent: ['default'].

Dart has operators you can define as messages, so their default syntax would be

   aDictionary[12]

But that's not the same operation, as it doesn't have the ability to say what to do if the key is missing. It does define

   aDictionary.putIfAbsent(12, 'default')

but that's not the same as at:ifAbsent:, both because it always adds something to the map/dictionary, and because the thing you're adding is a value, not the result of evaluating a block.
However, Dart also has named keyword arguments to methods. They're not shown, as far as I saw, in the introductory materials on the site, and I didn't notice them in the sample code that I looked at, but they're in the spec. The form is
   at(var key, [ifAbsent]) {
      if (containsKey(key)) return this[key];
      if (ifAbsent is Function) return ifAbsent();
      return ifAbsent;
   }

and to invoke it we'd do something like
  Map aMap = { "one": 1, "two": 2 };"one", ifAbsent: "default");

or, if we felt like formatting it slightly differently, it seems that there can be whitespace between the dot and the method, so
   aMap
      .at("one",
          ifAbsent: "default");
or
   aMap
      .at("one",
          ifAbsent: () => throw SomeException);

Both of those latter forms, while they have some additional bits of punctuation, look interestingly like syntax I'm more used to. Named parameters can also have default values, which could also be useful, though I tried
  someFunction(var a, [b = (var x) => x * 2])
and while it didn't complain of a syntax error, variable b was null if not specified. That might just be a compiler bug, or it may be that a closure literal isn't allowed there, I'm not sure.
I haven't tried using these on any kind of scale to see how usable they would be in practice, and they obviously aren't the preferred style in the examples, but I found it interesting. And as a minor point, it's odd that for normal parameters I have to specify either a type or "var" but that doesn't seem to be necessary for the named parameters.
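The at/ifAbsent idea translates readily to other languages. Here's a Python sketch of the same behaviour as the Dart method above (the function and parameter names are mine): look the key up, and on a miss either call the default when it's callable or return it as a plain value.

```python
def at(mapping, key, if_absent=None):
    """Look key up; on a miss, call if_absent if it's callable,
    otherwise return it as a plain default value."""
    if key in mapping:
        return mapping[key]
    if callable(if_absent):
        return if_absent()
    return if_absent

a_map = {"one": 1, "two": 2}
print(at(a_map, "one", if_absent="default"))             # 1
print(at(a_map, "three", if_absent="default"))           # default
print(at(a_map, "three", if_absent=lambda: "computed"))  # computed
```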

Friday, October 14, 2011

STIC 2012 Call For Participation

The Smalltalk Industry Conference (previously known as Smalltalk Solutions) 2012 Call For Participation is now out. The web site appears to be in a poor state right now, so I'm pasting the whole thing here.

Smalltalk Solutions is now called STIC – Smalltalk Industry Conference.

STIC is a forum where Smalltalk professionals, researchers, and enthusiasts can meet and share ideas and experiences. We are currently accepting proposals for talks involving Smalltalk technology and other areas of innovation in the software industry. We're looking forward to an excellent conference, and need your participation to maintain the high technical level of the conference!

The conference will take place in Biloxi, Mississippi, March 19 – 21, 2012.
Presentations will have 45 minutes time slots including discussion. They may be in the form of
  • Technical Presentations
  • Experience Reports
  • Technology Demonstrations
  • Panel Discussions
  • Workshops
  • ...
Proposals should be submitted by email to and should include the following information:
  • Name
  • Contact Information
  • Type of Presentation
  • Title of Presentation
  • Brief Abstract
  • Short Biography of the Presenter(s)
  • Any constraints on date/time
  • Any other information of importance in evaluating the proposal
If you cannot discuss the internal application you are working on because of corporate restrictions, perhaps you can discuss the application's component usage or development process. We will also be reserving time for short presentations, of the form of Lightning Talks and (very) short technology demonstrations, but these will be available for sign-up at the conference rather than as advance proposals.
Submissions should be received by December 15, 2011. Note that submissions with incomplete information may be rejected - particularly if bio or abstract information is missing.

Presenters will qualify for a significantly discounted registration. This year that will be $200 US, or $160 US for STIC members.
For announcements of the conference see and
Georg Heeg
STIC - Smalltalk Industry Council
Executive Director
Phone +49-3496-214328, Fax +49-3496-214712

Monday, October 10, 2011

Google+ Useful

I got an invite to Google+ fairly early on, but up until now I haven't found it all that useful. Right now my ratio of technical to non-technical people in circles stands at 42:1, so it acts a lot like yet another technical news service. But the other day I was looking at some of the posts people made, and there were several about talks at the upcoming Splash conference, which combines a number of other conferences, including OOPSLA. I hadn't been to OOPSLA in a few years, but there were some very interesting talks, including Daniel Weinreb on language extensibility in Object-Relational mapping (I don't care if you don't find O/R mapping interesting - I do :-), interesting massively multi-processor research in Smalltalk with Dave Ungar, Ivan Sutherland on getting away from the "prison" of sequential computing, and a talk from the always-interesting Dave Thomas (OTI / Bedarra) on why modern application development sucks. Lars Bak of newfound Dart fame will also be in attendance.

So now I'm going to go, and it's more or less because of Google+.


Today is Thanksgiving in Canada, but the relatives were all over yesterday, so that gave me a lot of time to spend looking at Google's new "Dart" language, introduced this morning at the GOTO conference in Aarhus.

So here are my initial thoughts and questions, given that I've really only just read stuff and written a few code snippets. I'll try to keep it a little beyond just complaining that it isn't my favourite language :-) Overall, it's clearly very early days for it, and there are many things yet to be done, but it looks like it does a lot of things right and has some serious potential.

Open Questions

  • What sort of tool support is there? I've seen that talked about, but not being at the keynote I haven't seen it demoed yet.
  • How exactly does the exception model work? One of the things I find most useful in Smalltalk is that exception handling is done in two phases. First, we find the handler and run it, and only after that do we unwind the stack. That means that when developing, an unhandled exception can put us right into the debugger at the point of the exception and with the ability to see the code and modify it. There are hints in the spec that suggest Dart is able to do this. The exception handler definitely gets a strong representation of the stack which "becomes undefined" after it's finished. I'd like to know more. Exceptions seem to be able to be any object, though many of the exceptions are classes. That seems like it might run into problems when you start wanting to be able to consistently ask certain things of an exception, but maybe the things you want to ask aren't really the exception object but the other things that come along into the handler.
  • Can I do some equivalent of block return? So, for example, the Dart collection library includes some basic iteration methods. There's a forEach, and there's a filter (think #select:) but there isn't a detect:. How would I write an equivalent of detect: in terms of a provided forEach method, given that the return from a function just returns from the inner block? Maybe I could do it with an exception, but that seems awfully ugly. Or I could save the value and let it just keep going through the rest of the collection, but I don't want to do that. And that leads into...
  • Can I add my own control structures? And if so, how pleasant will they be to use? This has a few pieces. Can I extend existing classes? Javascript lets me do this easily, as does Smalltalk. Newspeak doesn't, on philosophical grounds. But to me this seems awfully useful for being able to define my own control structures and other language elements, and that's an important piece of being able to extend the language into its own DSL.
  • In general, I find myself thinking about typical Smalltalk tools and wondering how doable it would be to write them in Dart. Maybe I'll give it a try and find out.

Things I Like

  • Everything's an object. No primitive types and no mismatched operations because of it.
  • Most things are message sends. There's still too much syntax for my taste, but most of the important stuff goes through message sends.
  • Unlimited size integers
  • There's a doesNotUnderstand: equivalent (noSuchMessage)
  • There's a fairly significant type system. You can completely ignore it.
  • The setters and properties are nice. You can reference things as properties, and they go through the get/set methods if you've written them.
  • Also, there's operator overloading, but in the sense that operators are really just methods, and it lets you write some nice things like things.stuff[1] = "Foo"
  • Types like "int" are actually interfaces. Though some people were arguing that it's confusing to have them as lowercase.
  • A late addition - no separate compile step.

Things I'm Not Quite Sure About

  • The concurrency model with single-threading, but the ability to spawn actor-style isolates sounds quite interesting. That'll be interesting to see.
  • The default return value is null, not self. This is philosophically because if you don't explicitly return something and you try to make use of the return value it's probably a mistake and you want it to fail fast. But I'm not sure this wouldn't lead to a lot of extra statements in methods if you do tend to make use of the return values.
  • There's an interpolated string form that they seem to favour, where you can embed ${expression} into a string. But one comment that I read argued that this wasn't as useful as being able to put in things more like positional arguments and then bind that to a set of values, and I think they have a point.
  • Reflection is still a to-do list item.
  • The syntax for functions is pretty lightweight. In the simplest case, something like (excuse my formatting)
     () => 3;
     (aNumber) => aNumber + 1;
and the more complex cases where it's not a single expression in the body are
     (aNumber) { print(aNumber); return aNumber + 1;};
That's nicely short. I find the parentheses to be a bit of syntactic noise that bothers me more than the very simple [3], but I might be able to live with it.

Things I Could Wish For
  • There's no become:, but then I wouldn't really have expected to see one. And I'd have rather had keyword argument syntax, but I didn't really expect that either.
  • Classes aren't objects. There are "static" methods but they aren't really class methods, and you can't have an expression that evaluates to a class. They argue that the class/instance method distinction has been shown to be confusing for users. There is a fairly sophisticated mechanism for constructors which might be enough for the most common uses of class methods, it's not clear.
  • The reserved words seem a bit more intrusive than they need to be. Or maybe it's just that I got bitten trying to write a method named "do" :-)
  • It seems like only certain things are Hashable and thus eligible as Map keys. That seems restrictive. But I might just be confused about that one.
  • There's a lot of spec space and presumably a lot of mental cycles spent on the static typing system. I'd just as soon that energy went on more worthwhile things :-)  Mostly you can ignore it, but I did run across one interesting case. There's a for loop construct that will let you loop over a collection. It requires you to put in a type, but ignores it. So I can't leave out the "int" in
    main() {
      var a = [1,2,3];
      for (int x in a) print(x);
    }
    but I can put in nonsense and it still runs
    main() {
      var a = [1,2,3];
      for (Exception x in a) print(x);
    }

Thursday, October 6, 2011

Google sues itself, more or less

This is an interesting twist: a Google-backed "non-practicing entity" patent holder suing Motorola, which Google is currently trying to acquire. In the current environment, where big companies build up big portfolios of patents to deter infringement, it seems to me that it's a real advantage to be a non-practicing entity, since you're immune to that defence. You can't be countersued for the patents your own products infringe, because you don't have any products. So ultimately it's a win for pure parasitism over anyone actually trying to produce something.

The last few years I've been involved with a few different patent lawsuits where things that were done years ago in TopLink or in Smalltalk might serve as prior art. When you talk to the lawyers they're generally quite open about the brokenness of the current patent system, but until something changes, what can you do but play along and charge lots of money for doing so?

Monday, October 3, 2011

Principles of OO Design, or Everything I Know About Programming, I Learned from Dilbert

Back in the last century, I wrote a column for the magazine "The Smalltalk Report". One of the articles I wrote has been enduringly popular and I still get the occasional person asking about it. So here it is yet again.

Principles of OO Design
Everything I know about programming, I learned from Dilbert

Everyone knows that objects and object-oriented design are the hottest things since sliced bread (and of course, slices of bread are objects). The problem is that it’s hard to agree on what exactly they are. There have been many attempts to define the principles of OO design and coding, with varying degrees of success. In my opinion, most of them suffer from two flaws. The first is that they don’t tell me enough about how to code. Reading a definition of polymorphism doesn’t tell me how to exploit it in my programs. The second, and more important problem, is that they’re dull. Even if the definition of polymorphism did tell me how to code, it’s hard to stay awake long enough to finish reading it.

Therefore, I modestly present some of my own principles of OO-ness, which I hope address both of these flaws. Furthermore, I believe that these principles relate well to the corporate environments that are currently adopting OO principles.

1) Never do any work that you can get someone else to do for you
This is always good advice, but it’s particularly applicable in OO. In fact, I consider it the fundamental principle of OO. As an object, my responsibilities are very clearly defined, and so are those of my co-workers. If something is (or ought to be) one of their responsibilities, then I shouldn’t try to do that work myself. 

Let’s look at a concrete example
   total := 0
   aPlant billings do: [:each |
      (each status == #paid and: [each date > startDate])
         ifTrue: [total := total + each amount]].
   total := aPlant totalBillingsPaidSince: startDate.

In the first case we’re asking the plant for all of its billings, figuring out for ourselves which ones qualify, and computing the total. That’s a lot of work, and almost none of it is our job. Far better to use the second option, where we simply ask for something to be done and get a result back. In real-world terms, the conversation might look like
“Excuse me Smithers. I need to know the total bills that have been paid so far this quarter. No, don’t trouble yourself. If you’ll just lend me the key to your filing cabinet I’ll go through  the records myself. I’m not that familiar with your filing system, but how complicated can it be? I’ll try not to make too much of a mess.”
Smithers actually understands his filing system, so he can probably do the work faster than we can, and he’s much less likely to mess everything up. In seeking to do his job for him, we’re just making things worse. They’ll get a lot worse when he switches over to that new filing system next week. We’d be far better off with the stereotypical tyrant boss.
“SMITHERS! I need the total bills that have been paid since the beginning of the quarter. No, I’m not interested in the petty details of your filing system. I want that total, and I’ll expect it on my desk within the next half millisecond.”
Let’s look at a simpler example, which is all too common.
   somebody clients add: Client new.
   somebody addClient: Client new.

There’s always a temptation to choose the first, since it saves writing a couple of methods that do nothing but adds and deletes on the other class. But deep down you know it’s wrong. You’re trying to do somebody’s work for them, and in the end it’s only going to cause problems. Writing those extra methods puts the responsibility where it belongs, and will make the code cleaner in the long run.

This principle is close to the more conventional idea of encapsulation, but I like to think it makes the idea a bit clearer. I often see people who are happily manipulating the internal state of another object, but think it’s OK because they’re doing it all through messages. Encapsulation is not just about accessing state, it’s about responsibilities. Responsibility is about who gets stuck doing the real work.

2) Avoid responsibility
If responsibilities are about getting stuck with work, it’s important to avoid them. This has some important corollaries
  1. If you must accept a responsibility, keep it as vague as possible.
  2. For any responsibility you accept, try to pass the real work off to somebody else.
Our first principle tells us to take advantage of other objects when writing code. We also have to avoid being taken advantage of. Any time I (as an object) am tempted to accept a responsibility, I should ask myself "Is this really my job?" and "Can't I get someone else to do this?"

If I do accept a responsibility, it’s important to keep it as vague as possible. If I'm lucky, this vagueness will help me get out of really doing the work later. Even if I do have to do the work, it may let me take some shortcuts without anybody else noticing.  

For example, I’ve seen objects with responsibilities described as

     Maintain a collection of the whosits to be framified

This is much too specific. My job isn't to maintain a collection, it's to be able to report, when necessary, which whosits need framification. That may be implemented by maintaining a collection, it may be implemented by asking one or more other objects for their collection(s), it may be hard-coded, or it may be computed dynamically as
   Whosit allInstances select: [:each | each needsFramification]
No matter which of these options I choose, there shouldn’t be any impact on my responsibilities.
My preference for phrasing a responsibility of this kind is
     “Know which ...”
but I’m flexible as long as the phrasing is suitably vague. I’d probably be even happier with 
     “Be able to report which ...”

Now, this is all very well, but carried to the extreme, it seems that this could lead to the situation where everyone passes information around and nothing ever gets done. Exactly. Object bureaucracy at its finest.

Seriously, a good OO system can actually approach this state. Each object will do a seemingly insignificant amount of work, but somehow they add up to something much larger. You can end up tracing through the system, looking for the place where a certain calculation happens, only to realize that the calculation has been done and you just didn’t notice it happening.

3) Postpone decisions
The great virtue of software is flexibility. One of the ways we achieve flexibility is through late binding. We most often talk about late binding between a method name and the method it invokes, but it’s also important in other contexts. When faced with a decision, we can gain flexibility by postponing it. The remaining code just needs to be made flexible enough to deal with any of the possible outcomes.

The ideal is when we can avoid making the decision at all, leaving it up to someone else (the end-user, other objects). For example, consider the question of how to implement dictionaries. The standard thing to do is use a hash table. That works well for medium-sized collections, but it’s a waste of space and effort for very small collections. For very large collections, it may also be wasteful, particularly if the number of elements exceeds the resolution of our hash function. We have to make a decision here, so we’d like to postpone it or pass it off to someone else.

Some implementations of the collection classes do precisely this. The collections pass off much of their behavior to an implementation collection which actually does the work. Depending on the size, the nature of that collection can change. In VisualAge 2.0, small dictionaries would be stored as arrays since the overhead of hashing was more than the cost of a linear search. Larger dictionaries could be represented as either normal or bucketed hash tables. Unfortunately, postponing this particular decision ended up leading to worse performance on the average cases, and the scheme was abandoned in VisualAge 3.0. This makes it not only a good example of the principle, but an illustration of when you can take it too far. Postponing decisions can have a performance cost.
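A toy Python version of that postponed decision, purely illustrative (the threshold and class name are my own): keep small dictionaries as a list of pairs, and switch to a real hash table once they grow.

```python
class AdaptiveDict:
    """Linear scan while small, real hash table once large."""
    THRESHOLD = 8

    def __init__(self):
        self._pairs = []    # small representation: list of (key, value)
        self._table = None  # large representation: built-in dict

    def __setitem__(self, key, value):
        if self._table is not None:
            self._table[key] = value
            return
        for i, (k, _) in enumerate(self._pairs):
            if k == key:
                self._pairs[i] = (key, value)
                return
        self._pairs.append((key, value))
        if len(self._pairs) > self.THRESHOLD:
            # The representation decision was postponed until here.
            self._table = dict(self._pairs)
            self._pairs = None

    def __getitem__(self, key):
        if self._table is not None:
            return self._table[key]
        for k, v in self._pairs:
            if k == key:
                return v
        raise KeyError(key)

d = AdaptiveDict()
for i in range(20):
    d[i] = i * i
print(d[15])  # 225
```

Clients only ever see __getitem__ and __setitem__, so the representation can change (or change back, as VisualAge 3.0 did) without touching them.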

There are other possible costs to this kind of principle. Decisions aren’t just sources of problems, they give us the power to solve our own problems. Since we can’t solve all the problems of the world at once, we make the decision to limit ourselves, and we make assumptions about the problems we’ll be given. This makes our code simpler, easier to write, and faster. The problem arises when it turns out our decisions were bad, or our assumptions don’t hold any more. The trick is to make enough decisions to be able to work, but few enough that our code doesn’t become brittle. That’s one of the things that makes software hard, and makes the ability to change software important. That’s one of the reasons I like Smalltalk, because it makes changing software much easier than any other environment I’ve seen.

4) Managers don’t do any real work
The subject of “manager” or “control” objects can provoke a lot of debate in OO circles, much as the subject of “managers” does in other work environments. Some argue that they are inherently unproductive and should be eliminated. Others argue that, although they may represent a throwback to outdated ways of thinking, they can be very useful under the right circumstances.

I definitely believe that managers can be useful, but it’s important to distinguish between good ones and bad ones. For example, consider a program in which most of my classes are “record objects” (objects whose only behaviours are get and set methods). The real work is done by a control class which manipulates these objects, with full access to all their data. At this point I have a procedural program dressed up in an OO disguise. The control object is in the most complete possible violation of the fundamental principle, since it’s trying to do all the work itself.

On the other hand, consider a window class like the VisualWorks ApplicationModel or the WindowBuilder WbApplication class. These are manager objects that coordinate the interactions between user interface widgets and the domain model. They serve as a vital “glue” layer (although I prefer to think of it as duct tape) and it would be much harder to get a clean design without them. This pattern is less clear in VisualAge, since the glue lies in a number of different “Part” classes, but it’s there.

People who are vehemently opposed to any kind of manager object are often stuck in the trap of trying to precisely model the world, taking the OO paradigm much too literally. One of my favourite quotes on this subject (from several years back) is from Jeff Alger, who wrote:
"The real world is the problem; why would you want to just simulate it?"
How can we tell a good manager object from a bad one? We apply the principle that managers don’t do real work. A manager object should manage interactions between other objects. It should not be trying to do work itself, unless it’s legitimate management work.

An example of legitimate management work is an ApplicationModel figuring out which menu items need to be disabled. An example of non-legitimate work would be doing (non-trivial) calculations of values to be displayed in its fields. Those values should be calculated by the domain objects.

This rule can be tricky to apply in practice. It can be difficult to decide if something is legitimate management work or not. Always remember that this is just a specific application of the fundamental principle. If the manager can plausibly get someone else to do the work, it should do so.
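The distinction between legitimate and non-legitimate management work can be sketched like this. The classes below are hypothetical (not from VisualWorks or WindowBuilder): a window-level manager decides widget state and formats output, but the actual total comes from the domain object.

```python
# Hypothetical sketch: the manager coordinates; the domain object works.

class Order:
    def __init__(self, lines):
        self.lines = lines  # list of (description, amount) pairs

    def total(self):
        # Real work belongs in the domain object.
        return sum(amount for _, amount in self.lines)

class OrderWindowManager:
    def __init__(self, order):
        self.order = order

    def total_field_text(self):
        # Formatting and routing only; the calculation is delegated.
        return f"Total: {self.order.total():.2f}"

    def delete_enabled(self):
        # Deciding widget state is legitimate management work.
        return len(self.order.lines) > 0
```

If OrderWindowManager instead summed the line amounts itself, it would be doing the domain’s work: that is the smell to look for.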

Another difficulty is that the word “Manager” is sometimes tacked on to the end of a class name even though what it describes is not a manager at all. In one comp.object discussion, Robert Cowham described a DiscountPolicyManager object and worried about the desirability of introducing a manager object, even though it seemed to make the design cleaner. The description was as follows:
A Discount Policy Manager is going to be passed, say, an Invoice object, and will calculate the appropriate discount to be applied to that Invoice (using methods on the Invoice to find out about it) and then use a method on Invoice to add the discount to it.
Reading this description, it’s clear that the DiscountPolicyManager is really just a policy object as described in the previous section. It isn’t a manager at all, and should be called DiscountPolicy instead.
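Cowham’s description translates into a small sketch. The method names, the rate, and the minimum are my own illustrative choices; what matters is the shape: the policy inspects the invoice through its public methods and applies the discount back through another, so it is a single packaged decision, not a manager.

```python
# A sketch of DiscountPolicyManager renamed as the policy object it is.

class Invoice:
    def __init__(self, subtotal):
        self._subtotal = subtotal
        self._discount = 0.0

    def subtotal(self):
        return self._subtotal

    def add_discount(self, amount):
        self._discount += amount

    def total(self):
        return self._subtotal - self._discount

class DiscountPolicy:
    """Not a manager: one decision, packaged as an object."""
    def __init__(self, rate, minimum):
        self.rate = rate        # e.g. 0.10 for 10%
        self.minimum = minimum  # only discount invoices at or above this

    def apply_to(self, invoice):
        if invoice.subtotal() >= self.minimum:
            invoice.add_discount(invoice.subtotal() * self.rate)
```

Because the policy is an object, swapping in a different discount scheme means passing a different policy, which is the postponing-decisions principle again.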

5) Premature optimization leaves everyone unsatisfied
The most fun you can have as a programmer is optimizing code. There’s nothing quite so satisfying as taking some little piece of functionality and making it run 50 times faster than it used to. When you’re deep in the middle of meaningless chores like commenting, testing, and documenting, the temptation to let go and optimize is almost irresistible. You know it’s got to be done sometime, and you feel like you just can’t put it off any longer. Sometimes you’re right and the time has come to make this piece of code really scream. More often than not, you’ll be happier in the long run if you can just hold off a little longer.

There are several reasons. First of all, time spent on optimization isn’t being spent on those “meaningless” chores, which are often more important to the success of the project. If testing and documentation are inadequate, most people won’t notice or care how fast a particular list box updates. They’ll have given up on the program before they ever got to that window.

That’s not the worst of it. Premature optimization is usually in direct violation of the principle of postponing decisions. Optimization often involves thoughts like “if we restrict those to be integers in the range from 3 to 87, then we can make this a ByteArray and replace these dictionary lookups with array accesses”. The problem is that we’ve made our code less clear and we’ve greatly reduced its flexibility. It may have felt really good at the time, but the other people involved in the project may not be entirely satisfied.

Of course this rule doesn’t apply to all optimizations. Most programs will need some optimization sometime, and this is particularly true in Smalltalk. As a very high-level language, Smalltalk makes it very easy to write very inefficient programs very quickly. A little bit of well-placed optimization can make the code enormously faster without harming the program.

There’s also a large class of optimizations that I call “stupidity removal” which can be profitably done at just about any time. These include things like using the right kind of collection for the job and avoiding duplicated work. Their most important characteristic is that they should also result in improvements to the clarity and elegance of the code. Using better algorithms (as long as their details don’t show through the layers of abstraction) can also fall into this category.
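A minimal example of “stupidity removal”, in Python rather than Smalltalk: the same duplicate check written with the wrong collection and then with the right one. The rewrite is both asymptotically faster and clearer about intent, which is the defining characteristic of this class of optimization.

```python
# The same question twice: does this sequence contain a duplicate?

def has_duplicates_slow(items):
    seen = []                 # wrong collection for membership tests
    for item in items:
        if item in seen:      # linear scan on every element
            return True
        seen.append(item)
    return False

def has_duplicates(items):
    seen = set()              # the right collection for the job
    for item in items:
        if item in seen:      # constant-time expected lookup
            return True
        seen.add(item)
    return False
```

Both functions give identical answers, so the change carries none of the clarity or flexibility costs of a premature optimization.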

Other Rules To Live By
There are a lot of other rules of life that can be extended to the OO design and programming domains. Here are a few more examples. Feel free to make up more and send them to me. Make posters out of them and put them up on your office wall. It’ll make a nice counterpoint to those insipid posters about “Teamwork” and “Quality” that seem to be everywhere these days.
·     Try not to care - Beginning Smalltalk programmers often have trouble because they think they need to understand all the details of how a thing works before they can use it. This means it takes quite a while before they can master Transcript show: 'Hello World'. One of the great leaps in OO is to be able to answer the question “How does this work?” with “I don’t care”.
·     Just do it! - An excellent slogan for projects that are suffering from analysis paralysis, the inability to do anything but generate reports and diagrams for what they’re eventually going to do.
·     Avoid commitment - This is another way of expressing the principle of postponing decisions, but one which might strike a chord with younger or unmarried programmers.
·     It’s not a good example if it doesn’t work - This one comes from Simberon's David Buck, who’s fed up with looking at example and test methods that haven’t been properly maintained as the code evolved. I can’t think of a way to apply this one to life, but it’s good advice anyway.
·     Steal everything you can from your parents - A principle for those trying to make effective use of inheritance or moving into their first apartment.
·     Cover your ass - Like in a bureaucracy, the most important thing is to make sure that it isn’t your fault. Make sure your code won’t have a problem even if things are going badly wrong elsewhere.

My original byline for this stated that "Alan Knight avoids responsibility with The Object People", but nowadays I'm, well, um, a manager at Cincom Systems. I'm trying not to think about the OO design implications of that.