Software Development is a Strange Profession

Software development is similar to other creative professions in many ways. For example, an architect's work on buildings involves planning, recognizing and applying well-known patterns, solving problems and so forth. But software development differs from most other professions in one very important way.

Imagine you are a painter, an architect or a musician looking for a job. What kind of information would you supply in your job application? Obviously, you would show proof of your skills (photographs of paintings or buildings or recordings of music). As a software developer, what do you show? An application?

Assume it’s a very nice application (or a very bad one). Sure, you can draw some conclusions from the externally observable quality and behavior of an application. But development is normally a team effort: your code might be perfect while others’ isn’t, and vice versa… Showing an application also assumes it can be shown. Your stuff might be deeply integrated in a backend server somewhere, or worse.

A friend of mine brought up the example of an electrician. After a job well done, the wiring is normally not visible, somewhat like the programmer’s code. Nevertheless, the electrician can still take photos before the walls cover up their work. Unfortunately, the code you write is normally a well-kept secret of the company you work for (and might be off-limits in terms of photography :). The problem is, as a software developer you have nothing to show! This is not only a problem for you, but also for the people trying to hire you.

As I’ve said before, your CV does not say much, since years of experience are not really an indicator of code quality. Admittedly, there are ways to prove your skills: doing a test task, participating in open source development, creating a portfolio of sample code to show, etc. But this doesn’t change the fact that software development is a strange profession. Can you think of another profession with the same problem?

Software Architecture Built to Survive Change

How do you create an object-oriented architecture that can survive change? We discuss how to isolate the parts of your system that never change, while still making it easy to add new functionality.

What is it that makes some software architectures fragile, while others are solid as a rock? Recently, I have come across a very appealing idea from multiple sources. First, a colleague of mine brought it up (hi, D!), then I read about it in the book “Lean Architecture for Agile Software Development” by James Coplien and Gertrud Bjørnvig. The book introduced me to the DCI software architecture, which I might cover in more detail in a later post (what I describe here is just part of it). Learning a new idea is like learning a new word: once you know it, you see it everywhere.

Let’s walk through a simple (and highly contrived :) example. Assume we have a bank system with users, accounts, transactions, transaction logs etc. We have a use case to implement: pay interest to a user’s account. Without giving it much thought, we could add a function addInterest to the Account class which calculates and deposits the interest and updates the transaction log. Given a few hundred features of this kind, your Account class will be cluttered with functions. As you can imagine, this eventually leads to poor readability, an increased risk of breaking working code and problems parallelizing work. Over time, your code might become complex enough to prohibit any kind of progress. Actually, this is the path many projects take.
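
To see where this leads, here is a hypothetical sketch; apart from addInterest, the method names are invented for illustration:

class Account
{
    // Every new feature ends up here, next to the last one:
    void addInterest() { /* calculate interest, deposit it, update the log */ }
    void transferMoneyTo(Account target, double amount) { /* ... */ }
    void chargeMonthlyFee() { /* ... */ }
    void printYearlyStatement() { /* ... */ }
    // ...a few hundred features later, nobody dares to touch this class
}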

Let’s try a different take. In the terminology of the “Lean Architecture” book, we want to separate “what the system is” (e.g. accounts, users) from “what the system does” (e.g. pay interest, transfer money). We create classes Account, TransactionLog etc. and define simple interfaces IAccount and ITransactionLog. The interfaces expose only administrative operations like getters/setters and add/remove functions (but no behavior or use case logic!). Among other functions, IAccount might expose addMoney while ITransactionLog might expose addLogEntry. We then implement our use cases as “algorithms” that operate on the interfaces. For example:

interface IUseCase
{
    void execute();
}

class PayInterestUseCase implements IUseCase
{
    private final IAccount account;
    private final ITransactionLog transactionLog;

    PayInterestUseCase(IAccount account,
        ITransactionLog transactionLog)
    {
        this.account = account;
        this.transactionLog = transactionLog;
    }

    public void execute()
    {
        double interest = calculateInterest(account);
        account.addMoney(interest);
        transactionLog.addLogEntry(account.getId(),
            interest);
    }

    // Placeholder for the real interest rule; note that it lives
    // in the use case, not in the Account domain object.
    private double calculateInterest(IAccount account)
    {
        return account.getBalance() * 0.05;
    }
}
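
For reference, the interfaces behind this example could look something like the following sketch. The text above only names addMoney and addLogEntry; getId and getBalance are assumed getters of the administrative kind the interfaces are said to expose:

interface IAccount
{
    int getId();
    double getBalance();          // assumed getter, used by calculateInterest above
    void addMoney(double amount); // a negative amount withdraws money
}

interface ITransactionLog
{
    void addLogEntry(int accountId, double amount);
}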

Later, when we add a “money transfer” use case, we already have the addMoney and addLogEntry functions. We create a new use case class TransferMoneyUseCase (sketched below): we withdraw money from one account (e.g. by providing a negative number to addMoney), deposit the money in the other account and update the log. The Account/TransactionLog objects and interfaces do not change at all, yet we can still extend the system! After a few use cases, we have a set of stable interfaces on which we can build almost anything, and changes and additions to the interfaces become fewer and fewer.
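
A possible sketch of that second use case, reusing the same interfaces (logging the transfer as two entries is my own guess at how the log would record it):

class TransferMoneyUseCase implements IUseCase
{
    private final IAccount from;
    private final IAccount to;
    private final ITransactionLog transactionLog;
    private final double amount;

    TransferMoneyUseCase(IAccount from, IAccount to,
        double amount, ITransactionLog transactionLog)
    {
        this.from = from;
        this.to = to;
        this.amount = amount;
        this.transactionLog = transactionLog;
    }

    public void execute()
    {
        from.addMoney(-amount); // withdraw
        to.addMoney(amount);    // deposit
        transactionLog.addLogEntry(from.getId(), -amount);
        transactionLog.addLogEntry(to.getId(), amount);
    }
}

Note that Account, TransactionLog and their interfaces stay untouched; only a new IUseCase implementation is added.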

To me, the concept of separating domain objects (what-the-system-is) from business logic (what-the-system-does) is very appealing. One changes slowly, if at all. The other will evolve and change all the time. I like it. What drawbacks/benefits do you see? Have you tried it? Leave your comments below.