Principles of OO Design
or
Everything I know about programming, I learned from Dilbert
Everyone knows that objects and object-oriented design are the hottest things since sliced bread (and of course, slices of bread are objects). The problem is that it’s hard to agree on what exactly they are. There have been many attempts to define the principles of OO design and coding, with varying degrees of success. In my opinion, most of them suffer from two flaws. The first is that they don’t tell me enough about how to code. Reading a definition of polymorphism doesn’t tell me how to exploit it in my programs. The second, and more important, problem is that they’re dull. Even if the definition of polymorphism did tell me how to code, it’s hard to stay awake long enough to finish reading it.
Therefore, I modestly present some of my own principles of OO-ness, which I hope address both of these flaws. Furthermore, I believe that these principles relate well to the corporate environments that are currently adopting OO principles.
1) Never do any work that you can get someone else to do for you
This is always good advice, but it’s particularly applicable in OO. In fact, I consider it the fundamental principle of OO. As an object, my responsibilities are very clearly defined, and so are those of my co-workers. If something is (or ought to be) one of their responsibilities, then I shouldn’t try to do that work myself.
Let’s look at a concrete example
total := 0.
aPlant billings do: [:each |
	(each status == #paid and: [each date > startDate])
		ifTrue: [total := total + each amount]].
versus
total := aPlant totalBillingsPaidSince: startDate.
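(As an aside, here’s a minimal sketch of how the plant itself might answer that message. It assumes the same billing protocol used in the loop above; the real class could just as easily keep a running total or delegate further.)

totalBillingsPaidSince: aDate
	"Answer the total amount of billings paid since aDate."
	^(self billings
		select: [:each | each status == #paid and: [each date > aDate]])
			inject: 0 into: [:sum :each | sum + each amount]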
In the first case we’re asking the plant for all of its billings, figuring out for ourselves which ones qualify, and computing the total. That’s a lot of work, and almost none of it is our job. Far better to use the second option, where we simply ask for something to be done and get a result back. In real-world terms, the conversation might look like
“Excuse me Smithers. I need to know the total bills that have been paid so far this quarter. No, don’t trouble yourself. If you’ll just lend me the key to your filing cabinet I’ll go through the records myself. I’m not that familiar with your filing system, but how complicated can it be? I’ll try not to make too much of a mess.”

Smithers actually understands his filing system, so he can probably do the work faster than we can, and he’s much less likely to mess everything up. In seeking to do his job for him, we’re just making things worse. They’ll get a lot worse when he switches over to that new filing system next week. We’d be far better off with the stereotypical tyrant boss.
“SMITHERS! I need the total bills that have been paid since the beginning of the quarter. No, I’m not interested in the petty details of your filing system. I want that total, and I’ll expect it on my desk within the next half millisecond.”

Let’s look at a simpler example, which is all too common.
somebody clients add: Client new.
versus
somebody addClient: Client new.
There’s always a temptation to choose the first, since it saves writing a couple of methods that do nothing but adds and deletes on the other class. But deep down you know it’s wrong. You’re trying to do somebody’s work for them, and in the end it’s only going to cause problems. Writing those extra methods puts the responsibility where it belongs, and will make the code cleaner in the long run.
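For the record, those extra methods are trivial. A sketch, assuming the receiving class keeps its clients in an instance variable named clients:

addClient: aClient
	clients add: aClient

removeClient: aClient
	clients remove: aClient ifAbsent: []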
This principle is close to the more conventional idea of encapsulation, but I like to think it makes the idea a bit clearer. I often see people who are happily manipulating the internal state of another object, but think it’s OK because they’re doing it all through messages. Encapsulation is not just about accessing state, it’s about responsibilities. Responsibility is about who gets stuck doing the real work.
2) Avoid responsibility
If responsibilities are about getting stuck with work, it’s important to avoid them. This has some important corollaries:
- If you must accept a responsibility, keep it as vague as possible.
- For any responsibility you accept, try to pass the real work off to somebody else.
Our first principle tells us to take advantage of other objects when writing code. We also have to avoid being taken advantage of. Any time I (as an object) am tempted to accept a responsibility, I should ask myself “Is this really my job?” and “Can’t I get someone else to do this?”
If I do accept a responsibility, it’s important to keep it as vague as possible. If I'm lucky, this vagueness will help me get out of really doing the work later. Even if I do have to do the work, it may let me take some shortcuts without anybody else noticing.
For example, I’ve seen objects with responsibilities described as
Maintain a collection of the whosits to be framified
This is much too specific. My job isn’t to maintain a collection, it’s to be able to report, when necessary, which whosits need framification. That may be implemented by maintaining a collection, it may be implemented by asking one or more other objects for their collection(s), it may be hard-coded, or it may be computed dynamically as
Whosit allInstances select: [:each | each needsFramification]
No matter which of these options I choose, there shouldn’t be any impact on my responsibilities.
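For instance (with made-up names), the same vaguely-stated responsibility can sit behind a single message whose implementation changes freely; senders can’t tell which version they’re talking to:

whositsToFramify
	"One implementation: answer a collection we maintain ourselves."
	^framifiableWhosits

whositsToFramify
	"Another implementation: compute the answer on demand."
	^Whosit allInstances select: [:each | each needsFramification]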
My preference for phrasing a responsibility of this kind is
“Know which ...”
but I’m flexible as long as the phrasing is suitably vague. I’d probably be even happier with
“Be able to report which ...”
Now, this is all very well, but carried to the extreme, it seems that this could lead to the situation where everyone passes information around and nothing ever gets done. Exactly. Object bureaucracy at its finest.
Seriously, a good OO system can actually approach this state. Each object will do a seemingly insignificant amount of work, but somehow they add up to something much larger. You can end up tracing through the system, looking for the place where a certain calculation happens, only to realize that the calculation has been done and you just didn’t notice it happening.
3) Postpone decisions
The great virtue of software is flexibility. One of the ways we achieve flexibility is through late binding. We most often talk about late binding between a method name and the method it invokes, but it’s also important in other contexts. When faced with a decision, we can gain flexibility by postponing it. The remaining code just needs to be made flexible enough to deal with any of the possible outcomes.
The ideal is when we can avoid making the decision at all, leaving it up to someone else (the end-user, other objects). For example, consider the question of how to implement dictionaries. The standard thing to do is use a hash table. That works well for medium-sized collections, but it’s a waste of space and effort for very small collections. For very large collections, it may also be wasteful, particularly if the number of elements exceeds the resolution of our hash function. We have to make a decision here, so we’d like to postpone it or pass it off to someone else.
Some implementations of the collection classes do precisely this. The collections pass off much of their behavior to an implementation collection which actually does the work. Depending on the size, the nature of that collection can change. In VisualAge 2.0, small dictionaries would be stored as arrays since the overhead of hashing was more than the cost of a linear search. Larger dictionaries could be represented as either normal or bucketed hash tables. Unfortunately, postponing this particular decision ended up leading to worse performance on the average cases, and the scheme was abandoned in VisualAge 3.0. This makes it not only a good example of the principle, but an illustration of when you can take it too far. Postponing decisions can have a performance cost.
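The names below are invented for illustration (this is not the actual VisualAge code), but the delegation idea looks roughly like this: the public dictionary forwards everything to an implementation object and quietly swaps it for a hash table once it grows.

at: aKey put: aValue
	"impl starts out as some small, linear-search implementation."
	(impl size >= 32 and: [impl class ~~ Dictionary])
		ifTrue: [self switchToHashedImplementation].
	^impl at: aKey put: aValue

switchToHashedImplementation
	| hashed |
	hashed := Dictionary new: impl size * 2.
	impl keysAndValuesDo: [:key :value | hashed at: key put: value].
	impl := hashed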
There are other possible costs to this kind of principle. Decisions aren’t just sources of problems, they give us the power to solve our own problems. Since we can’t solve all the problems of the world at once, we make the decision to limit ourselves, and we make assumptions about the problems we’ll be given. This makes our code simpler, easier to write, and faster. The problem arises when it turns out our decisions were bad, or our assumptions don’t hold any more. The trick is to make enough decisions to be able to work, but few enough that our code doesn’t become brittle. That’s one of the things that makes software hard, and makes the ability to change software important. That’s one of the reasons I like Smalltalk, because it makes changing software much easier than any other environment I’ve seen.
4) Managers don’t do any real work
The subject of “manager” or “control” objects can provoke a lot of debate in OO circles, much as the subject of “managers” does in other work environments. Some argue that they are inherently unproductive and should be eliminated. Others argue that, although they may represent a throwback to outdated ways of thinking, they can be very useful under the right circumstances.
I definitely believe that managers can be useful, but it’s important to distinguish between good ones and bad ones. For example, consider a program in which most of my classes are “record objects” (objects whose only behaviours are get and set methods). The real work is done by a control class which manipulates these objects, with full access to all their data. At this point I have a procedural program dressed up in an OO disguise. The control object is in the most complete possible violation of the fundamental principle, since it’s trying to do all the work itself.
On the other hand, consider a window class like the VisualWorks ApplicationModel or the WindowBuilder WbApplication class. These are manager objects that coordinate the interactions between user interface widgets and the domain model. They serve as a vital “glue” layer (although I prefer to think of it as duct tape) and it would be much harder to get a clean design without them. This pattern is less clear in VisualAge, since the glue lies in a number of different “Part” classes, but it’s there.
People who are vehemently opposed to any kind of manager object are often stuck in the trap of trying to precisely model the world, taking the OO paradigm much too literally. One of my favourite quotes on this subject (from several years back) is from Jeff Alger, who wrote:
"The real world is the problem; why would you want to just simulate it?"
How can we tell a good manager object from a bad one? We apply the principle that managers don’t do real work. A manager object should manage interactions between other objects. It should not be trying to do work itself, unless it’s legitimate management work.
An example of legitimate management work is an ApplicationModel figuring out which menu items need to be disabled. An example of non-legitimate work would be doing (non-trivial) calculations of values to be displayed in its fields. Those values should be calculated by the domain objects.
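As a rough sketch (the selector names are invented, not the real VisualWorks API), the difference between the two looks like this:

updateTotalField
	"Legitimate management work: coordinate, and let the domain object do the arithmetic."
	self totalHolder value: self invoice total

updateTotalField
	"Not legitimate: the application model doing the domain object's work for it."
	self totalHolder value: (self invoice lineItems
		inject: 0 into: [:sum :each | sum + each price])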
This rule can be tricky to apply in practice. It can be difficult to decide if something is legitimate management work or not. Always remember that this is just a specific application of the fundamental principle. If the manager can plausibly get someone else to do the work, it should do so.
Another difficulty is that the word “Manager” is sometimes tacked on to the end of a class name even though what it describes is not a manager at all. In one comp.object discussion Robert Cowham (cowhamr@logica.com) described a DiscountPolicyManager object, and worried about the desirability of introducing a manager object, even though it seemed to make the design cleaner. The description was as follows:
A Discount Policy Manager is going to be passed, say, an Invoice object, and will calculate the appropriate discount to be applied to that Invoice (using methods on the Invoice to find out about it) and then use a method on Invoice to add the discount to it.
Reading this description, it’s clear that the DiscountPolicyManager is really just a policy object as described in the previous section. It isn’t a manager at all, and should be called DiscountPolicy instead.
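A sketch of two methods on the renamed DiscountPolicy, using assumed selectors in the spirit of the description (the Invoice protocol here is hypothetical):

applyTo: anInvoice
	"Work out the discount from the invoice's own data, then hand it back to the invoice to record."
	anInvoice addDiscount: (self discountFor: anInvoice)

discountFor: anInvoice
	"Whatever rule this particular policy embodies."
	^anInvoice subtotal * self discountRate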
5) Premature optimization leaves everyone unsatisfied
The most fun you can have as a programmer is optimizing code. There’s nothing quite so satisfying as taking some little piece of functionality and making it run 50 times faster than it used to. When you’re deep in the middle of meaningless chores like commenting, testing, and documenting, the temptation to let go and optimize is almost irresistible. You know it’s got to be done sometime, and you feel like you just can’t put it off any longer. Sometimes you’re right, and the time has come to make this piece of code really scream. More often than not, you’ll be happier in the long run if you can just hold off a little longer.
There are several reasons. First of all, time spent on optimization isn’t being spent on those “meaningless” chores, which are often more important to the success of the project. If testing and documentation are inadequate, most people won’t notice or care how fast a particular list box updates. They’ll have given up on the program before they ever got to that window.
That’s not the worst of it. Premature optimization is usually in direct violation of the principle of postponing decisions. Optimization often involves thoughts like “if we restrict those to be integers in the range from 3 to 87, then we can make this a ByteArray and replace these dictionary lookups with array accesses”. The problem is that we’ve made our code less clear and we’ve greatly reduced its flexibility. It may have felt really good at the time, but the other people involved in the project may not be entirely satisfied.
Of course this rule doesn’t apply to all optimizations. Most programs will need some optimization sometime, and this is particularly true in Smalltalk. As a very high-level language, Smalltalk makes it very easy to write very inefficient programs very quickly. A little bit of well-placed optimization can make the code enormously faster without harming the program.
There’s also a large class of optimizations that I call “stupidity removal” which can be profitably done at just about any time. These include things like using the right kind of collection for the job and avoiding duplicated work. Their most important characteristic is that they should also result in improvements to the clarity and elegance of the code. Using better algorithms (as long as their details don’t show through the layers of abstraction) can also fall into this category.
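For example, one of the commonest bits of stupidity removal is just picking the right collection. In this hypothetical duplicate-filtering loop (items and process: are placeholders), swapping an OrderedCollection for a Set turns each membership test from a linear scan into a hash lookup, and arguably reads better too:

"Before: includes: scans the whole OrderedCollection every time."
seen := OrderedCollection new.
items do: [:each |
	(seen includes: each)
		ifFalse: [seen add: each. self process: each]].

"After: the same loop with a Set; includes: is now a hash lookup."
seen := Set new.
items do: [:each |
	(seen includes: each)
		ifFalse: [seen add: each. self process: each]]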
Other Rules To Live By
There are a lot of other rules of life that can be extended to the OO design and programming domains. Here are a few more examples. Feel free to make up more and send them to me. Make posters out of them and put them up on your office wall. It’ll make a nice counterpoint to those insipid posters about “Teamwork” and “Quality” that seem to be everywhere these days.
- Try not to care - Beginning Smalltalk programmers often have trouble because they think they need to understand all the details of how a thing works before they can use it. This means it takes quite a while before they can master Transcript show: 'Hello World'. One of the great leaps in OO is to be able to answer the question “How does this work?” with “I don’t care”.
- Just do it! - An excellent slogan for projects that are suffering from analysis paralysis, the inability to do anything but generate reports and diagrams for what they’re eventually going to do.
- Avoid commitment - This is another way of expressing the principle of postponing decisions, but one which might strike a chord with younger or unmarried programmers.
- It’s not a good example if it doesn’t work - This one comes from Simberon's David Buck (david@simberon.com), who’s fed up with looking at example and test methods that haven’t been properly maintained as the code evolved. I can’t think of a way to apply this one to life, but it’s good advice anyway.
- Steal everything you can from your parents - A principle for those trying to make effective use of inheritance or moving into their first apartment.
- Cover your ass - Like in a bureaucracy, the most important thing is to make sure that it isn’t your fault. Make sure your code won’t have a problem even if things are going badly wrong elsewhere.
My original byline for this stated that "Alan Knight avoids responsibility with The Object People", but nowadays I'm, well, um, a manager at Cincom Systems. I'm trying not to think about the OO design implications of that.
This is an excellent article. I really like the initial illustrations and the rules, as stated, are bang-on.
I have been around every conceivable block in my programming journey, in terms of exploring computing paradigms.
Now I'm getting into Smalltalk 'for real' and finding that the OO-ness of it is not even the main thing I find compelling: it's the live-ness of it. It's just easier to think about the abstractions 'in the present' as it were.
I think Smalltalk takes the idea of live objects to such a level of sophistication that most people can't quite grok the Platonic Forms of domain modelling that swirl around the mind of an accomplished Smalltalk developer.
The language and its implementation lack only one or two things from my POV, namely, pattern matching (in the functional programming sense, which I find so useful it hurts to be without it) and some form of advanced parallelism and concurrency support -- ideally support that is not too conscious unless a developer wants it to be. I actually love the 'share nothing' philosophy of functional programming and its attendant 'referential integrity' concepts.
But folks who have come to think OO is more buzzword than reality could not possibly have tried Smalltalk, not 'for real,' let alone tried to get good at it. It's not just a language or even a platform but a way of thinking about reduction of a problem to its essence, as this article makes clear.