How to structure packages in a Ports and Adapters architecture
Unless you want to dump all files of a project into a single folder, you sooner or later need to come up with a good package structure for your code.
Most of the time, you do not really put much thought into it, though.
Oftentimes, there are default project structures from npx create-something or start.spring.io that you just blindly follow, putting code where it seems to fit (I know I have in the early days of my career).
And many people join existing projects that already have a structure; when they start another project, they just copy whatever structure the old one had.
However, I find the discussion about the folder structure an interesting one to have. After all, it is one of the first points of contact that new members of your team (and you yourself, after a while) will have with the software.
In this post, I want to discuss some of the drivers and possibilities of a package structure. I will do this by assuming a codebase that is organized around the Ports and Adapters Architecture pattern, often referred to as Hexagonal Architecture (I assume the reader is familiar with this architecture style). As a side note, when I had a discussion last year with Alistair Cockburn, the inventor of the Hexagonal Architecture, he made it very clear that this architecture style has nothing to do with a package structure at all. The package structure is completely orthogonal to the architecture style. This is not to say that it’s not an important discussion for a team; it just means that it’s a matter of personal taste and not related to the core idea behind the architecture. What I present here is not to be taken as “This is how it must be done”. It is merely my personal opinion.
Also, I will narrow the discussion down to monolithic applications. Microservices might involve more complex setups, because in a monorepo packages and deployment artifacts can be harder to distinguish and might depend on a specific build tool.
Driving questions
Let’s first highlight what we actually want from a package structure.
Who is the “client” of the package structure? The compiler, for the most part, does not care about it. It might care that there are no cyclic dependencies between the packages, but that’s it.[1] The actual clients of a package structure are the programmers themselves. So when we think about a package structure, we must ask ourselves what kind of questions a (new) programmer might have when they see the codebase for the first time. What questions can be answered by the package structure?
Let’s say a developer is working on a bug ticket. The ticket description includes the steps required to reproduce the bug from a user perspective. A good package structure should enable the developer to find the involved pieces of code quickly. It should not only allow the developer to find the entry point to an execution path quickly, but also to narrow down the involved code so that they are not distracted by code that is not relevant.
The top folder
Most (monolithic) code repositories have a root folder for the source code, such as src/ or src/main/java if you follow the standard Maven directory layout.
This folder is required by build tools and separates the code from other files, such as a readme or license.
But what are the first folders that come afterwards?
Old-school “Three layered architecture” would suggest three folders:
src/
presentation/
logic/
persistence/
Each of these layer folders will grow enormously as the application gains more features. A more “modulithic” approach would be to find areas of the business domain that are separate from others, Bounded Contexts in DDD terms (although you do not need to do DDD in order to split the code into modules; other methods work just fine as well). It is almost traditional to explain architecture with the example of an online shop, so here is what a separation according to bounded contexts might look like for an online shop:
src/
catalog/
inventory management/
customer management/
payment processing/
shipping/
(I have a theory that every time business people say something that ends with “management” or “ing”, then that is a good candidate for a context)
Granularity
Oftentimes, developers follow what I call “noun-driven development”: everything that involves an order should be located in the Order class.
This creates “god classes” that implement every use case involving a certain noun, resulting in extremely large, hard-to-understand classes that cover multiple execution paths.
It’s better to split the code into smaller pieces that, even though they are about the same “thing”, are independent from one another.
This might result in more code, because independent code must sometimes duplicate logic or data structures.
But this independence allows you to write simpler and more focused code.
The question is how to split the code and where to put the pieces.
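To make the contrast concrete, here is a minimal Kotlin sketch; the Order example and all names in it are made up for illustration, not taken from a real codebase:

// Noun-driven: one class accumulates every use case that mentions "order".
class Order {
    fun place() { /* ... */ }
    fun cancel() { /* ... */ }
    fun refund() { /* ... */ }
    fun ship() { /* ... */ }
}

// Split by use case: each piece is small, focused and independent of the others,
// even if that means duplicating some logic or data structures.
class PlaceOrderService { fun placeOrder(orderId: String) { /* ... */ } }
class CancelOrderService { fun cancelOrder(orderId: String) { /* ... */ } }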
Package by feature
One way to split code is to “package by feature”, as described in this blog post by Philipp Hauer. Here is a code sample from Philipp’s blog post:
├── feature1
│   ├── Feature1Controller
│   ├── Feature1DAO
│   ├── Feature1Client
│   ├── Feature1DTOs.kt
│   ├── Feature1Entities.kt
│   └── Feature1Configuration
├── feature2
├── feature3
└── common
As you can see, code that belongs together is located in the same sub-folder.
This includes use-case-specific projections into the database, which results in simple code that only deals with one feature and thus one view of the world (though writing the database queries might be more complex).
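As a hedged illustration of such a feature-specific projection (the table, columns and class names below are my own assumptions, not taken from Philipp’s post), a DAO inside feature1 could select only the columns this feature actually needs:

import org.springframework.jdbc.core.JdbcTemplate

// A narrow, feature-local read model instead of a shared, fully-mapped entity.
data class Feature1View(val id: Long, val name: String)

class Feature1DAO(private val jdbcTemplate: JdbcTemplate) {
    fun findAll(): List<Feature1View> =
        jdbcTemplate.query("SELECT id, name FROM products") { rs, _ ->
            Feature1View(rs.getLong("id"), rs.getString("name"))
        }
}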
One point of criticism you could raise here is that it is not entirely clear what “feature” means.
In the article, Philipp has a feature called userManagement, but what exactly does this mean?
What actions can a user take here?
Which execution paths are involved?
It might be difficult to find the correct entry point, but it is still much easier to reason about than the three-layer approach.
Also, notice that there is a common package.
Even though code duplication is favored over inter-dependencies between features, there might be cases where sharing code makes sense.
As Philipp writes, we should not be dogmatic about these kinds of things.
Package by use case
Taking the idea of “package by feature” further, an even more granular approach is to “package by use case”. This works in a similar way, putting all classes (or files) that are involved in a single use case into a single folder. If you started with an Event Storming session, or if you split your use cases according to CQRS, then this packaging scheme fits nicely, because you can directly translate commands and queries into packages that do one thing. Here is an example:
addItemToCartUseCase/
AddItemToCartController
AddItemToCartPort
AddItemToCartService
PersistCartPort
PersistCartRepository
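Here is a minimal sketch of how the classes in such a use-case package could relate to each other; the names follow the tree above, but the method signatures and the Cart class are my own assumptions:

// The "thing" the use case works on; deliberately small and local to this use case.
class Cart(val id: String, private val items: MutableMap<String, Int> = mutableMapOf()) {
    fun add(productId: String, quantity: Int) {
        items[productId] = (items[productId] ?: 0) + quantity
    }
}

// Driving port: what the controller calls.
interface AddItemToCartPort {
    fun addItem(cartId: String, productId: String, quantity: Int)
}

// Driven port: what the persistence adapter (PersistCartRepository) implements.
interface PersistCartPort {
    fun loadCart(cartId: String): Cart
    fun saveCart(cart: Cart)
}

class AddItemToCartService(
    private val persistCartPort: PersistCartPort
) : AddItemToCartPort {
    override fun addItem(cartId: String, productId: String, quantity: Int) {
        val cart = persistCartPort.loadCart(cartId)
        cart.add(productId, quantity)
        persistCartPort.saveCart(cart)
    }
}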
You might find it difficult to implement this approach with certain technologies, as some frameworks do not let you create such fine-grained controllers if you want to have a REST-style API.
I worked on a project that used a Gradle plugin to generate controller interfaces from an OpenAPI spec file.
All endpoints with the path /cart were generated into the same Java interface, which made it impossible to use this packaging scheme.
It is also worth noting that this style increases the drawback of code duplication we saw in the “package by feature” approach even further, but maximizes code independence. The whole package serves just one API endpoint, nothing more, which is similar to how serverless functions are packaged.
This approach might also not be feasible if you have modeled your domain in aggregates according to DDD. Aggregates maintain invariants across multiple use cases of a bounded context, so you would have to come up with a way to enforce these invariants without resorting to “god classes”.
Internal structure of a package
So far, the files that are involved in a feature or use case have just been dumped into the same folder/package. Depending on the case, the number of files might get too large and you might want to organize the files inside a package into sub-packages.
Here is an example from Tom Hombergs’ book “Get Your Hands Dirty on Clean Architecture”:
registration
├── adapter
│   ├── in
│   │   └── web
│   │       └── BookController
│   └── out
│       └── persistence
│           ├── BookPersistenceAdapter
│           └── SpringDataBookRepository
├── domain
│   └── Book
└── application
    ├── RegisterBookService
    └── port
        ├── in
        │   └── RegisterBookUseCase
        └── out
            └── PersistBookPort
The classes of the registration feature are “categorized” by their role according to the Ports and Adapters Architecture.
A nice benefit of this is that you can enforce the architecture with tools like ArchUnit.
This would make sure that no class inside the domain or application sub-packages accesses a class in the adapter sub-package.
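Such a rule could look roughly like the following test (a sketch assuming JUnit 5, ArchUnit on the classpath and a hypothetical base package com.example.registration):

import com.tngtech.archunit.core.importer.ClassFileImporter
import com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses
import org.junit.jupiter.api.Test

class HexagonalArchitectureTest {

    @Test
    fun `domain and application must not depend on adapters`() {
        // Import the production classes of the (hypothetical) base package.
        val classes = ClassFileImporter().importPackages("com.example.registration")

        noClasses()
            .that().resideInAnyPackage("..domain..", "..application..")
            .should().dependOnClassesThat().resideInAPackage("..adapter..")
            .check(classes)
    }
}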
Note: The domain sub-package includes “movable” domain objects, like Aggregates, Entities and Value Objects.
It could also include data structures that are specific to the use cases, like a Command.
This is not necessary when using a programming language that allows you to add small data structure definitions in the same file as the use case (e.g. Kotlin or TypeScript).
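In Kotlin, for example, the command can simply live in the same file as the use case it belongs to (a sketch with made-up names):

// RegisterBookUseCase.kt -- command and use case share one file, so no extra package is needed.
data class RegisterBookCommand(val isbn: String, val title: String)

interface RegisterBookUseCase {
    fun registerBook(command: RegisterBookCommand)
}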
Higher-level code in higher-level folders
The following is my own adaptation of Tom Hombergs’ example above:
registration
│
├── RegisterBookUseCase (implements RegisterBookCapability)
│
├── model
│   └── Book
│
├── provides
│   ├── RegisterBookCapability
│   └── implementations
│       └── BookController
│
└── requires
    ├── PersistBookCapability (aka BookRepository)
    └── implementations
        └── BookPersistenceAdapter (implements PersistBookCapability)
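Translated into code, the pieces could relate roughly like this; the interface names follow the tree above, while the method signatures are my own assumptions:

// Lives in provides/ -- the capability this module offers to the outside.
interface RegisterBookCapability {
    fun registerBook(isbn: String, title: String)
}

// Lives in requires/ -- the capability this module needs from elsewhere (aka BookRepository).
interface PersistBookCapability {
    fun persist(isbn: String, title: String)
}

// Sits at the top of the package: the high-level business logic.
class RegisterBookUseCase(
    private val persistBook: PersistBookCapability
) : RegisterBookCapability {
    override fun registerBook(isbn: String, title: String) {
        // business rules would go here
        persistBook.persist(isbn, title)
    }
}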
The business use cases, which should be at the center of any software, are located at the top of the package. Going further down the folder structure, you discover more detailed, more technical pieces of code.
Note that the wording changed a bit here:
The Ports are now called Capabilities.
I never really liked the word “ports”, although I can see why Alistair did not want to use the word “interface”, as it can be quite misleading in OOP languages.
The term “capability”, in my opinion, explains the concept quite well.
Also, instead of an adapter package, there are now the packages provides (representing the driving side of the Ports and Adapters Architecture) and requires (representing the driven side).
The paths of the sub-packages now read like English sentences:
The registration requires the persist-book capability. It provides the register-book capability.
I got the inspiration from the book “Grokking Simplicity”, where the author Eric Normand defines a “layer structure”, but a little different from what you might expect. He splits the code not into technical layers, but into “higher-level” and “lower-level” code, the former of which involves very few details (like business use cases), the latter a lot (like database access). The idea of the package structure is to place the higher-level code in a higher folder than the lower-level code. As you dig deeper into the folders, the amount of detail you see increases. This fits together nicely with Robert C. Martin’s rule that a function or method should only deal with one level of abstraction.
An advantage of this approach is that the resulting structure is “screaming”: the use cases are at the top of the package structure and are among the first things you see when you open it for the first time. Higher-level code is thus given more prominence than other code. Also, the implementation of a capability is close to the interface definition.
A disadvantage compared to Tom Hombergs’ approach above is that the folder structure is less aligned with the typical architecture diagram you have in mind when thinking about the Ports and Adapters Architecture. And as I have not seen this type of structure in real projects, it might be unfamiliar to a lot of developers.
Conclusion
Many factors have to be considered when choosing a structure for your source code. A lot of them come down to personal taste, some have to do with the technologies that are involved. I hope I gave you some points to think about for the next time you start a new project. As always, feel free to leave feedback in the comments below.
[1] A noteworthy exception to this is Rust, where packages (called “modules”) form a tree that is mirrored in the folder structure. Every function in a module can reference all other functions within the same module, even when it is not public. As unit tests are part of the same module as the actual code, private functions can be tested without exposing them to the outside world, as one might have to do in Java. You can read more about how Rust’s modules work in this blog post.