Interface Segregation Principle in Software Design

ISP, not Internet Service Provider, but the Interface Segregation Principle, is the last of the famous principles of SOLID object-oriented software design. It was introduced by Robert C. Martin in his series of articles in 1996. The intention of this principle is to avoid the creation of “fat” interfaces.

A fat (or polluted) interface comes from extending the current interface with functionality that is useful only to a subset of the entities that depend on it. This eventually leads to the creation of dummy methods whose only purpose is to satisfy the interface. And that’s bad. Dummy methods are dangerous and also violate the Liskov Substitution Principle. The ISP, as the author wrote, declares that

Clients should not be forced to depend upon interfaces that they do not use.

Each interface should have a clearly defined purpose and represent a reasonable abstraction of one part of the current problem. The best practice (in my opinion) is to use multiple inheritance when implementing the interfaces. This separates things that don’t logically belong together at the abstraction level and removes wrong dependencies from our code. But it also allows us to couple them back together in objects that cover multiple concerns and work on the same data.

Let me show an example of what it should not look like. This is an interface for a car.

/* Bad example */
class CarOperation
{
    public:
        virtual void steer(int degrees) = 0;
        virtual void pullHandbrake() = 0;
        virtual void accelerate() = 0;

        virtual void shift(int gear) = 0;

        virtual void toggleAirConditioning() = 0;
};

There are a couple of common things you can do with a car. Every car usually has a steering wheel, an acceleration pedal and possibly even a handbrake. But what about cars with automatic transmission? They don’t allow the driver to shift gears, so what should they do with the shift method? The interface enforces its implementation. The same goes for air conditioning; some cars don’t have an air conditioner. The way out is to split the CarOperation interface into a couple of smaller ones.

class BasicCarOperation
{
    public:
        virtual void steer(int degrees) = 0;
        virtual void pullHandbrake() = 0;
        virtual void accelerate() = 0;
};

class GearboxCarOperation
{
    public:
        virtual void shift(int gear) = 0;
};

class AirConditioningCarOperation
{
    public:
        virtual void toggleAirConditioning() = 0;
};

class AlfaRomeo166 : public BasicCarOperation, public GearboxCarOperation, public AirConditioningCarOperation
{
    /* Implementation of all the interfaces. */
};

class SkodaFavorit136L : public BasicCarOperation, public GearboxCarOperation
{
    /* No air conditioning for old cars. */
};

The clients that will use the concrete cars won’t look at them directly as AlfaRomeo166 or SkodaFavorit136L. They will operate them through the interfaces. If some client function wants to turn on the air conditioning, it will look like this

void beCool(AirConditioningCarOperation* vehicle)
{
    vehicle->toggleAirConditioning();
}

That’s the beauty of the Interface Segregation Principle. You get exactly what you need, nothing more and nothing less, which makes the code easier to maintain and reuse, and saves you from a cascade of unpredictable errors when you decide to modify existing code.
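
For illustration, a hypothetical usage sketch (it assumes concrete classes like the ones above, with all the pure virtual methods actually implemented):

AlfaRomeo166 alfa;
beCool(&alfa);        /* fine -- AlfaRomeo166 implements AirConditioningCarOperation */

SkodaFavorit136L skoda;
/* beCool(&skoda); */ /* won't compile -- the Skoda doesn't expose that interface */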

Dependency Inversion Principle

DIP, or the Dependency Inversion Principle, is yet another guideline for software designers who work in an object-oriented environment. It’s the D in SOLID and it has one huge advantage over the other principles: in case it doesn’t work for you, you can always get some tortilla chips to help (they work wonderfully with dip ;-)).

This principle was introduced by Robert C. Martin in his article in 1996. He points out that the usual way of designing dependencies in a software project is to make general high-level modules dependent on the low-level utilities and mechanisms that do the hard (and in most cases also not very interesting) work. This kind of dependency makes the high-level modules very hard to reuse without many modifications (and people often think “why the hell didn’t I just write it again”). And this is wrong.

The high-level modules are the key part of the application. That’s where the heart of the application actually is: the algorithm that knows how to use the lower-level modules to achieve the desired functionality. And we want to reuse that without having to modify every third line, so what do we do?

Mr. Martin proposes the Dependency Inversion Principle, which says

A. High level modules should not depend upon low level modules. Both should depend upon abstractions.
B. Abstractions should not depend upon details. Details should depend upon abstractions.

It’s a little tough to understand at first, so let me explain. The principle states that there should be an additional layer between the high-level and low-level modules — the layer of abstractions. The author says there should be an interface (or abstraction) defined between the two modules, on which both should depend. That way the high-level modules don’t work directly with the low-level classes; the low-level classes implement the interfaces. In case you’d like to take some module out and use it elsewhere, you don’t need to touch anything inside that module. You simply take it out and implement the interfaces upon which it depends. Isn’t that awesome?

The second part (part B) makes clear that the abstractions (or interfaces) should not be designed according to the low-level modules (the details). That’s something that might tempt a lazy coder (“yeah, I’ll just duplicate the header file, make all methods pure virtual and I’m good to go”), but no. The interfaces have to be designed at the same level of abstraction as the high-level module, otherwise they’re worse than useless.

Example of Dependency Inversion

That would be the principle in theory. Let’s see some examples from user interfaces. We’ll have a Window class with two buttons.

class Button
{
    public:
        void makeVisible();
};

class Window
{
    Button* okButton;
    Button* cancelButton;

    public:
        Window()
        {
            okButton = new Button;
            okButton->makeVisible();

            cancelButton = new Button;
            cancelButton->makeVisible();
        }
};

The problem here is that if the Button implementation changes, we’ll have to come here and change the constructor as well. We don’t want that, because the Window class was the subject of a lot of tests, it passed them, and any additional messing around in it might introduce errors into the class. Using the abstraction layer, the situation would look like this

class IButton
{
    public:
        virtual ~IButton() {}
        static IButton* getInstance(); // factory method, defined by the application
        virtual void show() = 0;
};

class Window
{
    IButton* okButton;
    IButton* cancelButton;

    public:
        Window()
        {
            okButton = IButton::getInstance();
            okButton->show();

            cancelButton = IButton::getInstance();
            cancelButton->show();
        }
};

class Button : public IButton
{
    public:
        void show();
};

Now, as you can see, there’s an interface IButton, and both Button and Window depend on this interface. And that’s the dream. You can take the window and the interface, place them into another application, implement the interface, and you’re good to go! Note the factory method I used to be able to get the correct instance of the buttons (a static member function can’t be virtual in C++, so its definition is supplied by the application that decides which concrete button to create).
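
To make the wiring explicit, here’s a minimal sketch of how the remaining pieces might look; the bodies below are my own illustration, not something prescribed by the principle itself:

/* Somewhere in the application (the composition root) we decide
   which concrete button the factory hands out. */
IButton* IButton::getInstance()
{
    return new Button;
}

void Button::show()
{
    /* draw the button using whatever GUI toolkit this application uses */
}

int main()
{
    Window window; /* Window only ever talks to IButton */
    return 0;
}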

Liskov Substitution Principle

Another principle of object-oriented software design, the L in SOLID, the Liskov Substitution Principle! But first, a little background and some theory (feel free to skip right to the practical part of the post). The principle is named after Barbara Liskov, who initially introduced it in 1987. Prof. Liskov first defined it like this

What is wanted here is something like the following substitution property: If for each object O1 of type S there is an object O2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when O1 is substituted for O2 then S is a subtype of T.

This definition reads a little the other way around. It says (at least I read it like this), “if each O1 can stand in for some O2 without changing the program’s behavior, it’s safe to say that S is a subtype of T”. But there are cases (not that rare) in which S is a subtype of T, yet the objects aren’t substitutable (and that’s bad). It was later rephrased and published in a paper by Barbara Liskov and Jeannette Wing in 1994. The formulation changed to

Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T.

This is a little better. Now it says that if something works for objects of type T, it should work for objects of type S as well, provided that S is a subtype of T. And that’s the Liskov Substitution Principle. Robert C. Martin later described it in one of his articles like this

Functions that use pointers or references to base classes must be able to use objects of derived classes without knowing it.

Now, that’s how most of us like our software principles, right? No weird letters or signs — straightforward and easy to understand. I like the theory, though :-). Anyway, how is this useful in practice, in regular everyday coding? Let’s have a look at some examples. One of the most typical violations is the Square-Rectangle class hierarchy.

One would think that having a Square class as a subclass of Rectangle could be a good idea, right? The relationship it represents, “a Square is a Rectangle”, seems to work, so let’s do it.

class Rectangle
{
    protected:
        int width;
        int height;

    public:
        int getWidth() { return width; }
        int getHeight() { return height; }

        virtual void setWidth(int value) { width = value; }
        virtual void setHeight(int value) { height = value; }
};

Pretty straightforward declaration. The square is a rectangle whose width and height are equal, so we redefine the set methods.

class Square : public Rectangle
{
    public:
        void setWidth(int value)
        { width = value; height = value; }
        void setHeight(int value)
        { width = value; height = value; }
};

This modification will make sure that our square always has all sides equal. Then consider having a function like this

bool test(Rectangle &rectangle)
{
    rectangle.setWidth(2);
    rectangle.setHeight(3);

    return rectangle.getWidth() * rectangle.getHeight() == 6;
}

This function tests the interface of Rectangle. But what happens when you pass it a reference to a Square? It will break, because of the side effect of the set methods that keeps the Square a square. So, where’s the problem here?

The square is a rectangle, but it does not share the same behaviour. And that’s a deal-breaker when it comes to inheritance in software. The LSP clarifies that in object-oriented design the is a relationship applies to the public behavior of objects.
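
To see the breakage concretely, a small hypothetical driver (it assumes the classes and the test() function above, compiled together):

#include <iostream>

int main()
{
    Rectangle rectangle;
    Square square;

    std::cout << test(rectangle) << std::endl; /* prints 1 -- a plain Rectangle passes */
    std::cout << test(square) << std::endl;    /* prints 0 -- setHeight(3) also changed the width, so 3 * 3 != 6 */
    return 0;
}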

Bertrand Meyer also explored the topic in Design by Contract. He states

…when redefining a routine [in a derivative], you may only replace its
precondition by a weaker one, and its postcondition by a stronger one.

Preconditions are things that must be true in order for the method to execute, and postconditions are things that are always true after the method has been executed. This rule really helps me when I design something. Basically it says that you can only reduce the set of preconditions and only extend the set of postconditions. In other words, the new routine cannot require anything more than the original one (but can require even less) and cannot guarantee anything less than the original one (but can guarantee something on top of that).

In the context of the square-rectangle problem, there were no preconditions, but there was one postcondition on the setHeight() method: that the set method for height won’t change the width (a perfectly justified assumption). And this postcondition was broken by the redefined routine in Square.
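
One way to make that implicit postcondition explicit is to write it down as asserts; a sketch (the helper function is mine, not part of the original classes):

#include <cassert>

void checkedSetHeight(Rectangle& rectangle, int value)
{
    int oldWidth = rectangle.getWidth();
    rectangle.setHeight(value);
    assert(rectangle.getHeight() == value);   /* the explicit part of the contract */
    assert(rectangle.getWidth() == oldWidth); /* the implicit part -- a Square breaks this one */
}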

Inheritance is a very powerful and important concept in object-oriented design, but it’s also easy to get it dead wrong. The Liskov Substitution Principle should make you think more about a relationship when you create one, and help you avoid the eventual oh-no moment coming your way.

Single Responsibility Principle

The Single Responsibility Principle, or SRP, is another of the SOLID guidelines for software designers. It’s especially useful in object-oriented design. The name suggests that it will have something to do with decomposing the problem to the point where each entity in the system has one and only one responsibility. The principle itself states,

“There should never be more than one reason for a class to change.”

Right, but where the heck is the responsibility we’re talking about? You see, a responsibility can be pretty hard to define, and using the word directly would have definitely started a couple of fights. So the author defined it precisely as a ‘reason to change’. Let’s have a look at an example.

Here is the definition of my old MySQL class. It has an interface for establishing and closing a connection to a remote MySQL server, for executing a query, and for receiving and processing the query result.

class MySQL
{
    public:
        bool connect();
        void disconnect();

        bool executeQuery(std::string queryString);
        MySQLResult* getQueryResult();
};

This class has two reasons to change (i.e. responsibilities). It handles the initialization and closing of a connection to the database server and also the communication with the server (executing SQL queries). The two reasons to change are:

  • MySQL server will now accept only encrypted connections
  • The server implementation changes and it will respond differently to some queries

This violates the Single Responsibility Principle. It’s bad design to put together two things that change for different reasons. It might not seem that bad now, but the system will evolve and change. What now seems like a reasonable solution might kill you later on. The way I would design things now is this:

class MySQLConnection
{
    public:
        bool open(); /* former connect() */
        void close(); /* former disconnect() */
};

class MySQLQuery
{
    MySQLConnection* session;

    public:
        bool execute(std::string queryString);
        MySQLResult* getResult();
};
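
A hypothetical client of the two classes might then look like this (how MySQLQuery receives its session isn’t declared above, so the constructor here is my assumption):

MySQLConnection connection;
if (connection.open())
{
    MySQLQuery query(&connection); /* assumed constructor that stores the session pointer */

    if (query.execute("SELECT 1"))
    {
        MySQLResult* result = query.getResult();
        /* ... process the result ... */
    }

    connection.close();
}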

While SRP is a fairly simple principle, it’s pretty hard to get right. Putting responsibilities together is something that comes naturally to us, and the separation (e.g. splitting a class into several smaller ones) might not seem as elegant at first. When I look back at some of my earlier designs, well, to be honest, I rarely stumble upon a class that conforms to this principle. When I look again, I can really see how the separation would help reduce the complexity of the design and make my code easier to read and understand.

Following this principle religiously is definitely not a good idea, but it’s good to know it’s there and sometimes (especially when you see 500 lines in my_class.h) ask, ‘Hey, would splitting my class into a couple more help?’. Usually it does :-P.

SOLID Object-Oriented Design

What is a solid object-oriented design? Like, strong, steady, you know, or not a liquid or something? Well, who knows? Unless it’s written in CAPITALS! In that case it’s an acronym introduced by Robert C. Martin in the early 2000s that stands for five basic principles of object-oriented programming and design: Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation and Dependency Inversion, each of which gets its own post.

The principles (or guidelines), when applied together, are intended to make it more likely that a programmer will create a system that is easy to maintain and extend over time. These guidelines are here simply to make our lives a little easier. They’re certainly not to be followed religiously (nor is a fierce wrath to be laid upon anyone who dares to break them). If you find out that your design abides by them, good for you! If it doesn’t and you know the reason why, that’s no problem either :-).

I find them very helpful, especially when it comes to evaluating my work and that’s why I decided to take a break from design patterns for a while and go through the SOLID set of principles first.

Open/Closed Principle in Software Design

The Open/Closed Principle, or OCP, is one of the guidelines that help software developers achieve a high quality software design. Well, it’s actually pretty hard to tell what exactly the term high quality software means, but back to the OCP. Bertrand Meyer is credited with originating the term Open/Closed Principle, which appeared in his 1988 book Object-Oriented Software Construction. It goes like this:

Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.

What does it mean? When you design a piece of software, it’s vital to keep in mind possible places for future extensions. Let’s face it, customer specifications change with the speed of a guy who just found out he accidentally drank a whole bottle of extremely effective laxative. So your code is most certainly going to be subject to change and extension. And as we know, it doesn’t end well (in either case).

What the principle implies is that you can think a little ahead (build for today, design for the future, right?) and design your software so that no changes to existing code are necessary when it comes to adding new features and functionality. Let the code speak for itself:

def area(geometric_entity):
    if geometric_entity.type() == SQUARE:
        return geometric_entity.a * geometric_entity.a
    elif geometric_entity.type() == CIRCLE:
        return PI * geometric_entity.r * geometric_entity.r
    else:
        raise UnknownEntityError("I literally have no idea.")

This is a really dumb example in the first place, but it shows the key aspect of OCP. If you decide that it’d be nice to have your neat area function work with triangles as well, you need to go in and add another elif clause. Then others will come with requirements for other geometric entities, and before you know it, these few harmless, poorly coded lines will turn into a 1500-line Riemann-integral-solving monster (seen it happen). And if the code monster won’t eat you, your project manager definitely will …
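
For contrast, here’s a sketch of the same idea turned around so that it’s open for extension: each shape computes its own area, and adding a triangle later means adding a new class instead of editing an existing function. I’ve written it in C++ to match the other examples in this post; the class names are my own.

class GeometricEntity
{
    public:
        virtual ~GeometricEntity() {}
        virtual double area() const = 0;
};

class Square : public GeometricEntity
{
    double a;

    public:
        explicit Square(double side) : a(side) {}
        double area() const { return a * a; }
};

class Circle : public GeometricEntity
{
    double r;

    public:
        explicit Circle(double radius) : r(radius) {}
        double area() const { return 3.14159265358979 * r * r; }
};

A client that needs an area then simply calls entity->area() and never has to change when new shapes appear.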

The point is, every time you change something in your software, something else can go wrong (and according to Murphy’s laws, it usually does). There are unit tests, which are designed specifically to discover such errors (if you don’t write unit tests at all, you’re either doomed, a superman, or a fellow who enjoys quality time with gallons of coffee and the GNU debugger). But even unit tests don’t catch everything, and it’s generally a good idea to avoid poking the bear if you absolutely don’t have to.

Software quality might be a little subjective and is generally hard to define formally. So are these guidelines (that’s right, guidelines, not rules), just like software quality. They cannot be enforced unconditionally. But it’s definitely good to know that they’re there, and they might help you see flaws in your design before it’s too late and save your ass from getting fired :-P. Or not. Anyway, the Open/Closed Principle is not the only one, so stay tuned for more premium 95-octane knowledge (ok, I’ve probably been watching How I Met Your Mother waaay too much lately).
