We run distributed systems.  Some are in AWS, some on Linode, DigitalOcean, Vultr, Hivelocity, some internal, and maybe a one-off vendor somewhere.  So how do we keep all of these machines up to date on their necessary configurations?  Here's a bit of history, then details on a method we're trying.

System Configurations

For system configuration we could use something like CFEngine, Chef, Puppet or others.  Those tools are pretty awesome, but we've had difficulties with them in such heterogeneous deployments.  Many times we resort to shell scripts to solve the problem.

Application Configurations

Then we have the application configs too: run-time parameters, but also things that are rather baked in, like SQL or Redis hosts, credentials for those systems, or external APIs (and/or API endpoints).  Mature (read: legacy) systems have not been easy to integrate into recipes for these configuration-automation tools.

VMs and Docker?

Those are great for standing up an environment or deploying a new VM/instance, and they suit newer projects.  But it has not been that easy to break out and re-compartmentalise our existing systems.

Get Configgy Wit’it

We run a config host over HTTPS.  It's simple, dead simple, and easy to integrate into our existing legacy applications.  Very nice.  A basic REST-style API exposes hierarchical data, and a very simple UI lets us modify the values.

From any existing code or scripts, access to this data is trivial, so it's very easy to migrate operational parameters to our new centralised system.  These incremental changes are much easier to manage.

curl --header 'Accept: text/plain' 'https://cfg.edoceo.com/app/foo/redis-link'
red1.sfo.edoceo.lan,red2.chi.edoceo.lan

That's a simple shell command getting the location of some Redis systems.  The service just stores key/values over HTTPS, addressable as paths.  API calls can return multiple key/values per request, so you don't need 100 queries to fetch 100 parameters.
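Here's a sketch of how a script might consume one of those multi-value responses.  The `key=value` response format and the trailing-slash "fetch everything under this path" convention are assumptions for illustration, not the documented API; the fetched text is inlined so the example runs without network access.

```shell
#!/bin/sh
# Parse a batch key=value response into shell variables.
# In real use this would come from something like:
#   resp=$(curl -s -H 'Accept: text/plain' 'https://cfg.edoceo.com/app/foo/')
resp='redis-link=red1.sfo.edoceo.lan,red2.chi.edoceo.lan
sql-link=pg1.sfo.edoceo.lan'

while IFS='=' read -r key val; do
    # Dashes are not valid in variable names, so map them to underscores
    var=$(printf '%s' "$key" | tr '-' '_')
    eval "cfg_$var=\$val"
done <<EOF
$resp
EOF

echo "$cfg_redis_link"
echo "$cfg_sql_link"
```

One round trip fills every `cfg_*` variable the script needs, which is the point of the batch call.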

In most environments we capture this data and store it in memory only (shm, memcache).  That way operational, and possibly sensitive, data is never stored in code.  So far this has been easier than migrating to "bigger" systems such as Chef, Docker and so on, and it's become far easier to configure dozens of systems and applications.
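The memory-only caching can be sketched like this: write the fetched value to a RAM-backed tmpfs (`/dev/shm`) and re-fetch only after a TTL expires.  The cache path, TTL, and the stubbed `fetch_cfg` function are all assumptions for illustration; `stat -c %Y` is the GNU coreutils form.

```shell
#!/bin/sh
# Cache a config value in RAM-backed tmpfs so secrets never touch disk.
CACHE_DIR=/dev/shm
[ -d "$CACHE_DIR" ] || CACHE_DIR=/tmp   # fall back if no tmpfs mount
CACHE="$CACHE_DIR/cfg-foo-redis-link"
TTL=300  # seconds before we re-fetch from the config host

fetch_cfg() {
    # Stand-in for the real call:
    #   curl -s -H 'Accept: text/plain' 'https://cfg.edoceo.com/app/foo/redis-link'
    printf '%s' 'red1.sfo.edoceo.lan,red2.chi.edoceo.lan'
}

now=$(date +%s)
if [ -f "$CACHE" ] && [ $((now - $(stat -c %Y "$CACHE"))) -lt "$TTL" ]; then
    REDIS_LINK=$(cat "$CACHE")      # cache hit: no network round trip
else
    REDIS_LINK=$(fetch_cfg)         # miss or stale: re-fetch and refresh
    printf '%s' "$REDIS_LINK" > "$CACHE"
fi

echo "$REDIS_LINK"
```

On reboot the tmpfs is wiped, so the first caller simply re-fetches; nothing sensitive survives on disk.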