Incremental delivery: forget the database
Have you ever worked on a project with a big-switch deployment or migration? Such an undertaking is fraught with risk, regardless of the quality of testing. These “big bang” deployments are stressful experiences for the entire development team, and they have the unfortunate consequence of leaving the most stressful tasks to the point where time is most limited.
Incremental delivery takes a different approach, emphasising delivery of the smallest possible chunk of value as soon as possible in order to gain fast feedback. That feedback can then be used to rapidly correct false assumptions and bugs. This approach does not, however, excuse releasing software that is not fit for use. The value released might not be everything the project needs to achieve, but it delivers some working functionality. Continuous delivery practices align with incremental delivery: they provide mechanisms for deploying reliably and consistently, with the aim of reducing the time from code complete to release.
Incremental delivery alleviates many of the issues associated with large deployments. Nevertheless, there is a price to pay for delivering projects this way. For example, there may be scaffolding work, feature toggling, and even small pieces of functionality that get removed in a subsequent release. I would argue, though, that the fast feedback loop makes this worthwhile. The alternative carries a high risk and a high human cost. Kent Beck puts this well in his book, Extreme Programming Explained: “This scaffolding, technical or social, is the price you pay for insurance.” Incrementally delivered projects are bumpy initially but become smoother with time, whereas we see the opposite with big bang deployments.
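As a concrete illustration of the scaffolding mentioned above, here is a minimal feature-toggle sketch. All the names are hypothetical, not MarketInvoice's actual toggle setup; the point is that a toggle lets unfinished slices ship dark and be removed once the feature is fully live.

```python
# Minimal feature-toggle sketch. Names are illustrative only.
# The toggle is throwaway scaffolding: it ships with an early slice
# and is deleted once the feature is fully rolled out.

TOGGLES = {
    "direct_debit_mandates": True,   # slice already released
    "auto_fund_movement": False,     # still hidden behind the toggle
}

def is_enabled(feature: str) -> bool:
    """Return whether a feature is switched on, defaulting to off."""
    return TOGGLES.get(feature, False)

def start_application(customer_id: str) -> str:
    """Route to the new mandate flow only when its toggle is on."""
    if is_enabled("direct_debit_mandates"):
        return f"mandate setup started for {customer_id}"
    return f"manual mandate process for {customer_id}"
```

Removing the toggle later is a small, deliberate clean-up task, which is exactly the “insurance premium” Beck describes.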
I’ve benefited from this approach with projects I’ve led at MarketInvoice. One of these projects was a direct debit provider integration which consisted of three parts. The first part involved setting up direct debit mandates during customer application. The second involved identification of received funds while the third was automatic movement of funds to investor and MarketInvoice accounts.
Initially, the focus was on mandate setup during application. This was structured as thin slices of functionality, despite the temptation to jump straight into building out things we would surely need. On closer inspection, those things I thought we would need could be deferred without impacting functionality. Deferring database scaffolding reduced the upfront workload, and users were able to benefit from the integration much sooner. When priorities shifted towards reconciliation and money movement before we could complete all the work for application, we were able to transition to the new priority having already delivered value. I’m confident that if we had spent days working on the persistence layer instead of delivering value, all the work up to that point would have been abandoned.
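One way to defer database scaffolding, sketched below with hypothetical names rather than our actual code, is to give the first slice only the narrow persistence interface it needs and back it with an in-memory stand-in. A database-backed implementation can replace it in a later slice without touching the behaviour.

```python
from typing import Optional, Protocol

class MandateStore(Protocol):
    """The only persistence surface the first slice needs."""
    def save(self, customer_id: str, mandate_ref: str) -> None: ...
    def find(self, customer_id: str) -> Optional[str]: ...

class InMemoryMandateStore:
    """Stand-in store; a database-backed version can be swapped in later."""
    def __init__(self) -> None:
        self._mandates: dict = {}

    def save(self, customer_id: str, mandate_ref: str) -> None:
        self._mandates[customer_id] = mandate_ref

    def find(self, customer_id: str) -> Optional[str]:
        return self._mandates.get(customer_id)

def set_up_mandate(store: MandateStore, customer_id: str) -> str:
    """The slice depends on the interface, not on any database schema."""
    ref = f"MD-{customer_id}"  # hypothetical reference format
    store.save(customer_id, ref)
    return ref
```

The slice is complete and testable end to end, while the schema design stays out of the critical path until a later increment actually demands it.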
This process was also useful for a project handling incoming payments across our trust accounts. It involved moving large sums of money in an automated way and was therefore a higher risk project. The features were delivered incrementally. We first released the ability for our Operations team to move funds from incoming payments through a basic user interface, which allowed them to verify small money movements in production. Subsequent releases allowed other admin functions to be performed against payments and improved the user interface.
Operations didn’t use the system across their team until the final user interface changes were completed, so you could argue that no value was delivered through those increments. That would be a mistake, though: the ability to gain production feedback was invaluable given the risks involved. It allowed us to test early and get guidance for further iterations.
As software engineers, we’re used to thinking horizontally at the current level of abstraction. It’s logical to build out a data layer to handle a direct debit application journey. A mindset shift is needed to switch to a slice of end-to-end functionality. Horizontal thinking around delivery is similar to the problem caused by early abstraction in code. If we abstract too early in anticipation of further use cases, we often find the abstraction has assumptions built in which turn out not to be fit for purpose when we try to reuse it.
A more practical approach is to allow abstractions to emerge through effective refactoring as part of the usual TDD cycle. By focusing on the vertical, we allow ourselves to deliver a thin slice of functionality to completion without any superfluous code. This efficiency is broadly aligned with lean manufacturing principles, namely the reduction of waste in the form of overproduction and unnecessary movement, which improves flow and therefore quality.
Unfortunately, our frameworks don’t set us up for success when trying to keep the persistence level separate from our domain. As Bob Martin points out in his book Clean Architecture, the database itself is an implementation detail but data access frameworks emphasise row objects. This couples business rules and even the UI to the underlying data structure.
How can we prevent slipping into such a natural pattern of delivery? I would suggest starting by testing it out. You could try delivering a project you’re working on now incrementally, or you could try an exercise such as the excellent Elephant Carpaccio. The benefits become clear as you go, which makes the approach easier to stick to.
A more objective approach is to monitor how frequently you’re delivering to your end user and how large those chunks of work are. If branches stay open for many days, it’s likely there’s an issue with the way the work has been split. Also, ask yourself: if we were to stop an ongoing project right this moment, is anything of value out in production? It’s also necessary to ensure the process of delivering is streamlined and reliable to encourage flow. If it takes days to release to production, then you’re set up to fail.
It’s very important to invest in continuous delivery practices. When you’re looking at persistence, ask yourself: is everything absolutely necessary for the slice of functionality to work correctly? Can anything be deferred to a later stage? There may be reporting obligations, but learn what they are and deliver them as a slice of work, rather than persisting data for its own sake just in case something is needed.
If you’re keen to chat about incremental delivery or just want to know more about Tech at MarketInvoice, email firstname.lastname@example.org
TL;DR – Forget the database and persistence until absolutely necessary – only implement the smallest possible schema to support a vertical slice of value to the end user. Don’t allow your ORM (object-relational mapping) framework to pollute your domain model with row objects. Deliver thin verticals of end-to-end functionality to your users often and get fast feedback!