Hexagonal Architecture Design

The hexagonal architecture helps to build robust and change-friendly code. I have used many different architecture paradigms over the last 20+ years of writing software, and for me the hexagonal approach is the best fit for modern software engineering. It can be used in any programming language and helps to sustain a software development effort over a long time.

Overview

The hexagonal architecture has many representations. It is usually drawn as a hexagon with three layers. In addition to the layers there is an east-west (left-right) split.

The outer layer adapts the business logic to the outside world. This world is the realm of protocols and interactions: HTTP, AMQP, and CLI commands on the left; data sources and other protocols such as databases and HTTP APIs on the right.

The business logic is the core of the application and contains pure logic and data models. These must be written to be completely independent of any external influences. To interact with the outside world, interfaces (ports) and adapters are used.
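
As an illustration, here is a minimal Go sketch (the names MessageSink and GreetingService are hypothetical, not taken from any framework): the core defines a port as a plain interface and contains no i/o code, while adapters on the outside provide the concrete implementations.

```go
package core

// MessageSink is a port: the core states what it needs from the
// outside world, but not how it is implemented.
type MessageSink interface {
	Send(recipient, text string) error
}

// GreetingService is pure business logic. It knows nothing about
// HTTP, AMQP or databases; it only talks to the port.
type GreetingService struct {
	Sink MessageSink
}

func (s GreetingService) Greet(name string) error {
	if name == "" {
		name = "world"
	}
	return s.Sink.Send(name, "hello "+name)
}
```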

Is there a pure hexagonal design?

The question is how we would measure the purity of a design. In my eyes there is only one way: check whether the core business logic needs to change when a new outer layer is introduced or an existing one is replaced, for example for a new database or protocol.

If the answer is yes, it is not a pure hexagonal design.
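
To make that test concrete, here is a hedged sketch reusing the hypothetical MessageSink port from above: a new outer-layer implementation is a pure addition in the adapter layer, and the core package is not edited at all.

```go
package adapters

import (
	"fmt"
	"log"
)

// StdoutSink is the original adapter writing to standard output.
type StdoutSink struct{}

func (StdoutSink) Send(recipient, text string) error {
	_, err := fmt.Println(recipient+":", text)
	return err
}

// LogSink is a newly introduced adapter for a different output channel.
// Adding it did not require any change to the core package; that is
// the purity test passing.
type LogSink struct{ Logger *log.Logger }

func (l LogSink) Send(recipient, text string) error {
	l.Logger.Printf("to=%s msg=%s", recipient, text)
	return nil
}
```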

So when are we allowed to change the core business logic?

Ideally you don’t change it at all but add new business logic as “add-only” code, to avoid breaking the existing system logic. If a change can’t be avoided, modifying the core is allowed when the nature of the application demands it.

Can one derive a hexagonal design from first principles?

I think you can. The well-known input-process-output (IPO) model has been taught to new programmers for decades, and you can already see its similarity to the hexagonal design. So what is the difference?

The IPO model doesn’t expect the program to change. If we did expect the input and output to change, e.g. to support different formats, we would add interfaces so that multiple implementations become possible.
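
A hedged Go sketch of that difference: the classic IPO wiring hard-codes stdin and stdout, while the process step only depends on the io.Reader and io.Writer interfaces, so new inputs and outputs become new implementations instead of changes to the processing code (the filter itself is just a placeholder).

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
	"strings"
)

// filterLines is the "process" step. Because it only sees io.Reader and
// io.Writer, files, network streams, buffers or test fixtures can be
// swapped in without touching this function.
func filterLines(in io.Reader, out io.Writer, substr string) error {
	scanner := bufio.NewScanner(in)
	w := bufio.NewWriter(out)
	defer w.Flush()
	for scanner.Scan() {
		if strings.Contains(scanner.Text(), substr) {
			if _, err := fmt.Fprintln(w, scanner.Text()); err != nil {
				return err
			}
		}
	}
	return scanner.Err()
}

func main() {
	// Classic IPO wiring: stdin in, stdout out.
	if err := filterLines(os.Stdin, os.Stdout, "error"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```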

Why would someone change the input and output but not the core business logic, you might ask. Let’s take a very familiar program as an example: the UNIX grep (to be explicit, NOT the GNU/Linux grep). It is not hexagonal; it is actually tightly coupled to its input and output. It follows the UNIX philosophy of doing one thing and doing it well. Looking at OpenBSD 7.3, it has about 2k lines of C code. Here there is clearly no case for changing the input and output, e.g. extending grep to filter databases or JSON documents. Instead we use separate programs like sqlite3 or the jq utility. We will come back to this later.

Since we are talking about operating systems already, they are also a very good example of the hexagonal architecture applied. Some might give up a clean design for performance reasons, but usually there are multiple input devices, e.g. keyboards attached via different transport protocols (USB, Bluetooth), and multiple output devices like a terminal, screen, or serial ttys. The operating system abstracts the input and output devices so that the core business logic, usually our application (e.g. a shell), can run independently of the environment.

Modern business applications usually face similar problems. They have a core of business logic that has to use technical infrastructure like databases, queues and object stores, and that has to be offered to customers in a flexible manner, e.g. via mobile apps, email interaction, web pages, CLI tools, APIs, etc. Like the kernel, the business application has a changing environment around a stable core. This is the main use case for the pattern.
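
A hedged sketch of that stable core with a changing environment (OrderService, OrderStore and the /orders route are made-up names): a CLI entry point and an HTTP endpoint both drive the same core service, while the data store sits behind a port on the other side.

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// OrderStore is the right-hand (driven) port, e.g. backed by a database.
type OrderStore interface {
	Save(id string) error
}

// OrderService is the stable core; in a real application it would live in
// its own package with no knowledge of HTTP or databases.
type OrderService struct{ Store OrderStore }

func (s OrderService) PlaceOrder(id string) error { return s.Store.Save(id) }

// memStore is a trivial in-memory adapter standing in for a real database.
type memStore struct{ ids []string }

func (m *memStore) Save(id string) error { m.ids = append(m.ids, id); return nil }

func main() {
	svc := OrderService{Store: &memStore{}}

	// Left-hand (driving) adapter #1: a CLI command.
	if len(os.Args) > 1 {
		if err := svc.PlaceOrder(os.Args[1]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		return
	}

	// Left-hand (driving) adapter #2: an HTTP endpoint.
	http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		if err := svc.PlaceOrder(r.URL.Query().Get("id")); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusCreated)
	})
	if err := http.ListenAndServe(":8080", nil); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```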

Is it new, should I use it all the time?

No. We can be thankful that people write down patterns and teach them so that others have an easier time solving a particular architecture problem. The kernel example already showed that this pattern is very old and has been recognized as a good idea since the beginning of computing.

If you use the pattern all the time without reflecting on the use case first, you are doing it wrong. In the next part I will reflect on the problems that can arise from using the pattern.

Can it be harmful?

Yes, I think it can be. Just as we see great use and value in this pattern, it can also be harmful.

I want to be more specific. Think about the kernel we looked at earlier. Kernels have, for the most part, become big monoliths. This is not coincidental; the cause is the nature of the hexagonal pattern. All applications I have built so far showed the same result. Since the pattern is so flexible, you can basically build everything in one big monolithic application. And since the protocols sit on the shell of the application, all business logic changes are usually made inside the application itself.

So why is this a problem? Large monolithic applications have multiple disadvantages. The main ones are:

  • No separate failure domains: a crash usually takes down the whole app
  • They are hard to understand, due to the many use cases and i/o interactions
  • They are hard to scale to a large number of contributors
  • They cannot be deployed independently

This doesn’t mean you should not do it. The discussion is as old as computing, usually framed as micro vs. macro service/app/kernel. The answer today is, as always: it depends.

A small team with good focus might be more productive and effective with only one application. Over time and with growth, it only becomes harder to split up such an application.

How to avoid the monolith issue?

In my experience it is important to frame the scope of the application beforehand, e.g. saying “this application does XYZ” and then sticking to that. One has to be rigorous and start new applications for other focus areas. Examples of focus areas are: identity, billing, CRM, …

This approach will create many smaller, well-built applications, whether based on the hexagonal design or not. You can pick and choose the best pattern or trade-off (e.g. for performance) for your needs.

The caveat of this approach is that it is not straightforward to share one and the same i/o adapter with another application. This usually means putting it into a shared library, which is then used by multiple services. This can create the well-known anti-pattern of the distributed monolith.

TL;DR

The hexagonal architecture is an old, well-established design pattern. It shows its main value when we need to adapt the core business logic to new i/o environments.

It also presents dangers in the form of the monolithic code that tends to result from it. Sticking to one main task is key, while keeping common code in shared libraries.