1. Expectations of modern applications
Almost everyone in the business of either buying or selling applications uses buzzwords like “Cloud application”, “OS agnostic”, “Micro-services”, “Highly available”, “Containerized”, and so on.
The developers on the other hand often have to cope with the ever-changing field of terms, standards and expectations, and still maintain the same software year after year.
One aspect of this maintenance is the ever troublesome application configuration, and where to store it.
How older apps store configs
Since the beginning, developers have stored configuration in local files deployed with the application. This was perhaps the easiest solution to the problem of changing the app's behaviour without rebuilding it completely: all you had to do was change the config file and restart the application.
The 24/7 uptime application
All was well, and everyone seemed content with this for a rather long time. Then, at some point, business started demanding high availability and 24/7 uptime. This led to running at least two instances of the same application at the same time, and introduced a problem with the application's configuration.
The config was now duplicated in two places with exactly the same information, or rather, the same information that was not instance-specific.
So updating the config meant shutting down both instances, changing the config, and starting them again. This of course meant a small maintenance break for the whole application, which again was not in line with the business's 24/7 requirement.
The micro-services and CI processes
As time went by, lots of people invented different solutions to these problems, some better than others, but everything that happened led us slowly but surely to the micro-service pattern, where monolithic applications are split into smaller pieces that often work almost independently of the rest.
This enabled developers to minimize downtime, or even eliminate it, since many instances of a micro-service run at the same time and can be updated one by one, ensuring the service is always available.
These instances can even run as pods in Kubernetes or similar systems, where the actual instances are handled by a third party and the developers just declare how many replicas they want running at the same time.
But what about the application config? In many cases it somehow still lurks with each instance as a local file.
When developers enter the micro-service era, many of them have trouble stepping back and looking at what else changes with the micro-servicification of their apps. The app is split into a set of services. These services are independent of each other (in a code/config sense) and no longer have a strong relationship. Some may depend on each other, but via REST APIs instead of specific classes.
So micro-services also need to find each other, which is covered by service discovery. In a very similar manner, configurations can be discovered via configuration discovery.
2. Configuration Discovery
The process of configuration discovery is actually quite simple, and has only a few steps:
- Locate the centralized configuration service (via service discovery!)
- Ask for the application's configuration for a specific environment
- Apply the configuration, and continue normally
- Periodically reload the config (steps 1-3) and apply it if it has changed
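The steps above can be sketched as a small polling loop. A minimal sketch in Python, where `fetch` stands in for locating the config service (via service discovery) and asking it for the config; the function names and the dict-shaped config are assumptions for illustration, not a real API:

```python
import hashlib
import json


def config_changed(old_config, new_config):
    """Detect a change by hashing the canonical JSON form of each config."""
    def digest(config):
        return hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()).hexdigest()
    return digest(old_config) != digest(new_config)


def discovery_loop(fetch, apply, ticks):
    """Run the discovery steps `ticks` times.

    fetch: steps 1-2, locate the service and ask for the config dict.
    apply: step 3, activate a config in the running service.
    A real service would loop forever with a sleep between iterations.
    """
    current = None
    for _ in range(ticks):
        fresh = fetch()
        if current is None or config_changed(current, fresh):
            apply(fresh)
            current = fresh
    return current
```

Hashing a canonical serialization means the loop only re-applies the config when something actually changed, which keeps step 4 cheap.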
Also note that not all config parameters need to be centralized: only those that apply equally to all instances.
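One way to draw that line in code is to layer the instance-local settings over the shared, centralized ones. A small sketch; all parameter names here are invented for illustration:

```python
# Shared config, fetched from the centralized service: same for every instance.
shared = {"log_level": "INFO", "feature_x_enabled": True, "db_pool_size": 10}

# Instance-specific config, kept as a local file or env vars: differs per instance.
local = {"instance_id": "worker-17", "listen_port": 8081}

# Local values win on conflict; everything else comes from the shared config.
effective = {**shared, **local}
```

The merge order makes the policy explicit: centralized parameters affect every instance, while anything truly per-instance stays local and overrides.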
Tools such as Puppet or Chef exist, but they tend to extend the local-config-file approach by pushing files onto the local machines. This does not really work well when one has hundreds of instances coming and going, especially in containers like Docker.
Why bother with this centralized config?
Let’s imagine a large setup spanning many regions and multiple zones. Such a system could have hundreds of different micro-services, with anywhere from 0 to 50 instances of an individual service running at any time, in any zone.
The problem becomes evident when a configuration parameter needs to be changed for a specific service. I think most developers would agree that re-deploying the service just to change a config parameter is a rather wasteful operation, not to mention that it disrupts micro-service instances that may have long-running tasks that need to finish.
So the immediate benefits to Ops are at least:
- Services can use just-in-time config parameters (read it when you need it)
- Config changes can take effect at a certain date and time
- Can change configs for just specific versions of services
- No more redeploys on config changes
- Micro-services no longer have to be restarted
- Changes are picked up as soon as the micro-services see them
- Change management is centralized and can be handled more flexibly
There are many other benefits as well, but in general these are the ones that pop up during discussions about this.
So, what are the problems that this approach brings?
There are a number of downsides of course as well. Nothing is for free in this world.
- There needs to exist a Centralized Management of the Configurations
- This centralized system needs to be highly available, adding one more component to the overall system
- Micro-Services need to be changed (Configuration Discovery added to them)
- 3rd party apps need new deployment scripts to fetch Configurations during deploy / restarts
- Security aspects need to be managed. Who can read or change the configs?
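For the third-party-app case above, a deployment script can fetch the centralized configuration once at deploy or restart time and write it out as the local file the app already expects. A minimal Python sketch; the fetch callable and the output path are placeholders for whatever (authenticated) transport and file format your setup uses:

```python
import json


def render_config(fetch_config, out_path):
    """Fetch the centralized config and write it to the local file
    a third-party app reads at startup.

    fetch_config: a callable doing the actual retrieval, e.g. an
    authenticated HTTPS GET against the config service (placeholder).
    """
    config = fetch_config()
    with open(out_path, "w") as f:
        json.dump(config, f, indent=2)
    return config
```

Run from the deploy script just before starting or restarting the app, this keeps the third-party software unchanged while its config still comes from the central service.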
I’m sure people can come up with a long list of further problems with this approach; when you do, also let others know how you would mitigate them!
I’m also hoping for comments about solutions that exist to this problem!
There has been very little written about this subject, and even fewer systems created for it. I’m hoping someone will create a larger system that offers this but does not include extra pieces like service discovery or CMDB functionality, since those are separate areas and in a modular, decoupled world they should not sit in the same app.
But for now I leave you with some of the few links to articles I found on the matter: